Test Report: Docker_Linux_crio 19872

d8c730041b5457cdbe5017f8cce276eb986ed9a4:2024-10-28:36847

Test failures (4/330)

|-------|-------------------------------------------------------|--------------|
| Order | Failed test                                           | Duration (s) |
|-------|-------------------------------------------------------|--------------|
|    36 | TestAddons/parallel/Ingress                           |       151.58 |
|    38 | TestAddons/parallel/MetricsServer                     |       314.36 |
|    99 | TestFunctional/parallel/PersistentVolumeClaim         |       189.00 |
|   121 | TestFunctional/parallel/ImageCommands/ImageListShort  |         2.25 |
|-------|-------------------------------------------------------|--------------|
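
To triage locally, each failed test can be re-run on its own from a minikube source checkout. A minimal sketch; the go-test invocation and harness flag below are assumptions about the upstream test layout, so confirm them against test/integration/main_test.go on this branch:

    # Build out/minikube-linux-amd64, then re-run one failing test against
    # the same driver/runtime combination as this report (docker + cri-o).
    make
    go test ./test/integration -v -timeout 90m -run "TestAddons/parallel/Ingress" \
      -args --minikube-start-args="--driver=docker --container-runtime=crio"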
TestAddons/parallel/Ingress (151.58s)
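
This failure reduces to a timed-out HTTP check: the nginx pod became Ready, but curl run inside the node never got a response from the ingress controller. curl's documented exit status 28 is its operation-timeout code, which the harness surfaces below as "ssh: Process exited with status 28". A manual spot-check, assuming the addons-803184 profile is still running (the -v and --max-time flags are diagnostic additions, not part of the test):

    # Repeat the failing in-node request with verbose output and a hard timeout,
    # then check the ingress controller pod itself.
    out/minikube-linux-amd64 -p addons-803184 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-803184 -n ingress-nginx get pods -o wide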

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-803184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-803184 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-803184 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [941aa725-fa4f-4528-ab43-1ad1d9561d08] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [941aa725-fa4f-4528-ab43-1ad1d9561d08] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003440213s
I1028 17:11:40.263137  108914 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-803184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.510601947s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-803184 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-803184
helpers_test.go:235: (dbg) docker inspect addons-803184:

-- stdout --
	[
	    {
	        "Id": "8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7",
	        "Created": "2024-10-28T17:07:48.694578747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 111014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T17:07:48.828704388Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b614a1ff29c6e85b537175184edffd528c6bd99b5b0eb51bb6059bd4ad5ba0a2",
	        "ResolvConfPath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/hosts",
	        "LogPath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7-json.log",
	        "Name": "/addons-803184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-803184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-803184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da-init/diff:/var/lib/docker/overlay2/6f44dcb837d0e69b1b3a1c42f8a8e838d4ec916efe93e3f6d6a8c0411f4e43e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-803184",
	                "Source": "/var/lib/docker/volumes/addons-803184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-803184",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-803184",
	                "name.minikube.sigs.k8s.io": "addons-803184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd0a18dbf335a437e9015f60020c8a0e160ebabba8b9ad55a900b4d1378f85ee",
	            "SandboxKey": "/var/run/docker/netns/fd0a18dbf335",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-803184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c41606722e6f3e1ef41cf3f5ba84835c6a256c1b4bab5daeeca0436af7c726e2",
	                    "EndpointID": "b0c15c4462e901eec2425a62b5c711f7c90e55b4a5a1af61771147dd7062d9c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-803184",
	                        "8beae7471f18"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-803184 -n addons-803184
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 logs -n 25: (1.158444637s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-328985                                                                     | download-only-328985   | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| start   | --download-only -p                                                                          | download-docker-179742 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | download-docker-179742                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-179742                                                                   | download-docker-179742 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-988801   | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | binary-mirror-988801                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35689                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-988801                                                                     | binary-mirror-988801   | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| addons  | disable dashboard -p                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-803184                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-803184                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-803184 --wait=true                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | -p addons-803184                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-803184 ip                                                                            | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-803184 ssh cat                                                                       | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | /opt/local-path-provisioner/pvc-6dbabf11-4f7e-4e00-b596-30d9d2fb3ea8_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-803184 ssh curl -s                                                                   | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-803184 ip                                                                            | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:13 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:07:24
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:07:24.785481  110282 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:07:24.785605  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:24.785614  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:07:24.785618  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:24.785783  110282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:07:24.786455  110282 out.go:352] Setting JSON to false
	I1028 17:07:24.787343  110282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2986,"bootTime":1730132259,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:07:24.787452  110282 start.go:139] virtualization: kvm guest
	I1028 17:07:24.819760  110282 out.go:177] * [addons-803184] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:07:24.901159  110282 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:07:24.901163  110282 notify.go:220] Checking for updates...
	I1028 17:07:25.037563  110282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:07:25.122388  110282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:07:25.206023  110282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:07:25.277619  110282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:07:25.359594  110282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:07:25.423887  110282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:07:25.444617  110282 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:07:25.444732  110282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:07:25.491224  110282 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-28 17:07:25.481654457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:07:25.491321  110282 docker.go:318] overlay module found
	I1028 17:07:25.611590  110282 out.go:177] * Using the docker driver based on user configuration
	I1028 17:07:25.683521  110282 start.go:297] selected driver: docker
	I1028 17:07:25.683550  110282 start.go:901] validating driver "docker" against <nil>
	I1028 17:07:25.683565  110282 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:07:25.684391  110282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:07:25.731052  110282 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-28 17:07:25.722023507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:07:25.731232  110282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:07:25.731495  110282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:07:25.766395  110282 out.go:177] * Using Docker driver with root privileges
	I1028 17:07:25.809308  110282 cni.go:84] Creating CNI manager for ""
	I1028 17:07:25.809393  110282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:07:25.809405  110282 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 17:07:25.809499  110282 start.go:340] cluster config:
	{Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:25.851871  110282 out.go:177] * Starting "addons-803184" primary control-plane node in "addons-803184" cluster
	I1028 17:07:25.934036  110282 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 17:07:26.015885  110282 out.go:177] * Pulling base image v0.0.45-1730110049-19872 ...
	I1028 17:07:26.141305  110282 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon
	I1028 17:07:26.141315  110282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:26.141428  110282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:07:26.141443  110282 cache.go:56] Caching tarball of preloaded images
	I1028 17:07:26.141551  110282 preload.go:172] Found /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:07:26.141564  110282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:07:26.141902  110282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/config.json ...
	I1028 17:07:26.141930  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/config.json: {Name:mka4295eb11d0690c289fe7ea69051b27a134fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:26.157670  110282 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 to local cache
	I1028 17:07:26.157812  110282 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local cache directory
	I1028 17:07:26.157837  110282 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local cache directory, skipping pull
	I1028 17:07:26.157846  110282 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 exists in cache, skipping pull
	I1028 17:07:26.157862  110282 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 as a tarball
	I1028 17:07:26.157870  110282 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 from local cache
	I1028 17:07:38.623063  110282 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 from cached tarball
	I1028 17:07:38.623106  110282 cache.go:194] Successfully downloaded all kic artifacts
	I1028 17:07:38.623156  110282 start.go:360] acquireMachinesLock for addons-803184: {Name:mkc61bd3c490082ef7b102a5ec0ecfb79ea6ac1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:07:38.623271  110282 start.go:364] duration metric: took 88.743µs to acquireMachinesLock for "addons-803184"
	I1028 17:07:38.623302  110282 start.go:93] Provisioning new machine with config: &{Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:07:38.623372  110282 start.go:125] createHost starting for "" (driver="docker")
	I1028 17:07:38.625399  110282 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1028 17:07:38.625637  110282 start.go:159] libmachine.API.Create for "addons-803184" (driver="docker")
	I1028 17:07:38.625673  110282 client.go:168] LocalClient.Create starting
	I1028 17:07:38.625788  110282 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem
	I1028 17:07:38.795006  110282 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem
	I1028 17:07:38.957380  110282 cli_runner.go:164] Run: docker network inspect addons-803184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1028 17:07:38.973849  110282 cli_runner.go:211] docker network inspect addons-803184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1028 17:07:38.973927  110282 network_create.go:284] running [docker network inspect addons-803184] to gather additional debugging logs...
	I1028 17:07:38.973950  110282 cli_runner.go:164] Run: docker network inspect addons-803184
	W1028 17:07:38.989982  110282 cli_runner.go:211] docker network inspect addons-803184 returned with exit code 1
	I1028 17:07:38.990025  110282 network_create.go:287] error running [docker network inspect addons-803184]: docker network inspect addons-803184: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-803184 not found
	I1028 17:07:38.990040  110282 network_create.go:289] output of [docker network inspect addons-803184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-803184 not found
	
	** /stderr **
	I1028 17:07:38.990196  110282 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 17:07:39.006877  110282 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021ed1f0}
	I1028 17:07:39.006930  110282 network_create.go:124] attempt to create docker network addons-803184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1028 17:07:39.006994  110282 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-803184 addons-803184
	I1028 17:07:39.072386  110282 network_create.go:108] docker network addons-803184 192.168.49.0/24 created
	I1028 17:07:39.072419  110282 kic.go:121] calculated static IP "192.168.49.2" for the "addons-803184" container
	I1028 17:07:39.072492  110282 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1028 17:07:39.087433  110282 cli_runner.go:164] Run: docker volume create addons-803184 --label name.minikube.sigs.k8s.io=addons-803184 --label created_by.minikube.sigs.k8s.io=true
	I1028 17:07:39.105020  110282 oci.go:103] Successfully created a docker volume addons-803184
	I1028 17:07:39.105148  110282 cli_runner.go:164] Run: docker run --rm --name addons-803184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-803184 --entrypoint /usr/bin/test -v addons-803184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -d /var/lib
	I1028 17:07:44.089434  110282 cli_runner.go:217] Completed: docker run --rm --name addons-803184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-803184 --entrypoint /usr/bin/test -v addons-803184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -d /var/lib: (4.984236069s)
	I1028 17:07:44.089467  110282 oci.go:107] Successfully prepared a docker volume addons-803184
	I1028 17:07:44.089483  110282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:44.089511  110282 kic.go:194] Starting extracting preloaded images to volume ...
	I1028 17:07:44.089571  110282 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-803184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -I lz4 -xf /preloaded.tar -C /extractDir
	I1028 17:07:48.635465  110282 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-803184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.545846919s)
	I1028 17:07:48.635502  110282 kic.go:203] duration metric: took 4.545989612s to extract preloaded images to volume ...
	W1028 17:07:48.635640  110282 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1028 17:07:48.635733  110282 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1028 17:07:48.679892  110282 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-803184 --name addons-803184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-803184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-803184 --network addons-803184 --ip 192.168.49.2 --volume addons-803184:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9
	I1028 17:07:49.004091  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Running}}
	I1028 17:07:49.022667  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:07:49.041064  110282 cli_runner.go:164] Run: docker exec addons-803184 stat /var/lib/dpkg/alternatives/iptables
	I1028 17:07:49.082982  110282 oci.go:144] the created container "addons-803184" has a running status.
	I1028 17:07:49.083025  110282 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa...
	I1028 17:07:49.174857  110282 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1028 17:07:49.195421  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:07:49.213492  110282 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1028 17:07:49.213514  110282 kic_runner.go:114] Args: [docker exec --privileged addons-803184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1028 17:07:49.259054  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:07:49.278611  110282 machine.go:93] provisionDockerMachine start ...
	I1028 17:07:49.278704  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:49.296905  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:49.297123  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:49.297142  110282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 17:07:49.297938  110282 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55408->127.0.0.1:32768: read: connection reset by peer
	I1028 17:07:52.415421  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-803184
	
	I1028 17:07:52.415453  110282 ubuntu.go:169] provisioning hostname "addons-803184"
	I1028 17:07:52.415543  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:52.432679  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:52.432889  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:52.432906  110282 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-803184 && echo "addons-803184" | sudo tee /etc/hostname
	I1028 17:07:52.559459  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-803184
	
	I1028 17:07:52.559540  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:52.576689  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:52.576871  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:52.576887  110282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-803184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-803184/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-803184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:07:52.692092  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:07:52.692122  110282 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19872-102136/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-102136/.minikube}
	I1028 17:07:52.692140  110282 ubuntu.go:177] setting up certificates
	I1028 17:07:52.692151  110282 provision.go:84] configureAuth start
	I1028 17:07:52.692213  110282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-803184
	I1028 17:07:52.708582  110282 provision.go:143] copyHostCerts
	I1028 17:07:52.708673  110282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-102136/.minikube/ca.pem (1078 bytes)
	I1028 17:07:52.708786  110282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-102136/.minikube/cert.pem (1123 bytes)
	I1028 17:07:52.708845  110282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-102136/.minikube/key.pem (1679 bytes)
	I1028 17:07:52.708893  110282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-102136/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca-key.pem org=jenkins.addons-803184 san=[127.0.0.1 192.168.49.2 addons-803184 localhost minikube]
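	The server cert above is issued with SANs covering the loopback address, the node IP, the hostname, localhost, and minikube. To double-check which SANs actually landed in a generated server.pem, a quick openssl inspection works (a sketch; openssl is assumed available on the host, and the path is taken from the log line above):

	# List the Subject Alternative Names embedded in the server certificate
	openssl x509 -in /home/jenkins/minikube-integration/19872-102136/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'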
	I1028 17:07:52.894995  110282 provision.go:177] copyRemoteCerts
	I1028 17:07:52.895079  110282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:07:52.895122  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:52.911979  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:52.996900  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 17:07:53.020610  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:07:53.043655  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:07:53.065491  110282 provision.go:87] duration metric: took 373.32663ms to configureAuth
	I1028 17:07:53.065520  110282 ubuntu.go:193] setting minikube options for container-runtime
	I1028 17:07:53.065734  110282 config.go:182] Loaded profile config "addons-803184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:53.065851  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.081895  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:53.082077  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:53.082093  110282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:07:53.284337  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:07:53.284372  110282 machine.go:96] duration metric: took 4.005737045s to provisionDockerMachine
	I1028 17:07:53.284387  110282 client.go:171] duration metric: took 14.658702752s to LocalClient.Create
	I1028 17:07:53.284415  110282 start.go:167] duration metric: took 14.65877684s to libmachine.API.Create "addons-803184"
	I1028 17:07:53.284428  110282 start.go:293] postStartSetup for "addons-803184" (driver="docker")
	I1028 17:07:53.284444  110282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:07:53.284521  110282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:07:53.284579  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.302157  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.392857  110282 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:07:53.396157  110282 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 17:07:53.396203  110282 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 17:07:53.396215  110282 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 17:07:53.396226  110282 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 17:07:53.396240  110282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-102136/.minikube/addons for local assets ...
	I1028 17:07:53.396321  110282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-102136/.minikube/files for local assets ...
	I1028 17:07:53.396367  110282 start.go:296] duration metric: took 111.930763ms for postStartSetup
	I1028 17:07:53.396744  110282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-803184
	I1028 17:07:53.413279  110282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/config.json ...
	I1028 17:07:53.413578  110282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:07:53.413625  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.430597  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.512577  110282 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 17:07:53.516815  110282 start.go:128] duration metric: took 14.893424884s to createHost
	I1028 17:07:53.516850  110282 start.go:83] releasing machines lock for "addons-803184", held for 14.893563934s
	I1028 17:07:53.516919  110282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-803184
	I1028 17:07:53.533187  110282 ssh_runner.go:195] Run: cat /version.json
	I1028 17:07:53.533248  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.533263  110282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:07:53.533331  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.550481  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.551174  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.631904  110282 ssh_runner.go:195] Run: systemctl --version
	I1028 17:07:53.714102  110282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:07:53.858156  110282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 17:07:53.862436  110282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:07:53.880070  110282 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1028 17:07:53.880150  110282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:07:53.905937  110282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1028 17:07:53.905965  110282 start.go:495] detecting cgroup driver to use...
	I1028 17:07:53.906045  110282 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 17:07:53.906114  110282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:07:53.920610  110282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:07:53.931127  110282 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:07:53.931179  110282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:07:53.943658  110282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:07:53.956594  110282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:07:54.033812  110282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:07:54.111427  110282 docker.go:233] disabling docker service ...
	I1028 17:07:54.111497  110282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:07:54.129011  110282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:07:54.139816  110282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:07:54.213475  110282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:07:54.293971  110282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:07:54.304976  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:07:54.319807  110282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:07:54.319875  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.329521  110282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:07:54.329582  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.338870  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.348014  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.357714  110282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:07:54.366109  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.375082  110282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.389468  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.398204  110282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:07:54.405580  110282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:07:54.412963  110282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:54.488723  110282 ssh_runner.go:195] Run: sudo systemctl restart crio
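	The sed edits above rewrite four settings in /etc/crio/crio.conf.d/02-crio.conf before the restart: the pause image, the cgroup manager, conmon's cgroup, and the unprivileged-port sysctl. A quick way to review the result (a sketch reconstructed from the commands in this log, not captured output):

	# Show the keys the preceding sed commands touched
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf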
	I1028 17:07:54.590396  110282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:07:54.590468  110282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:07:54.593988  110282 start.go:563] Will wait 60s for crictl version
	I1028 17:07:54.594045  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:07:54.597236  110282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:07:54.629387  110282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1028 17:07:54.629498  110282 ssh_runner.go:195] Run: crio --version
	I1028 17:07:54.663925  110282 ssh_runner.go:195] Run: crio --version
	I1028 17:07:54.700469  110282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1028 17:07:54.701791  110282 cli_runner.go:164] Run: docker network inspect addons-803184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 17:07:54.718227  110282 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1028 17:07:54.721820  110282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:07:54.732369  110282 kubeadm.go:883] updating cluster {Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:07:54.732495  110282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:54.732544  110282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:54.798937  110282 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:07:54.798959  110282 crio.go:433] Images already preloaded, skipping extraction
	I1028 17:07:54.799006  110282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:54.830801  110282 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:07:54.830827  110282 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:07:54.830835  110282 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1028 17:07:54.830923  110282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-803184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:07:54.830982  110282 ssh_runner.go:195] Run: crio config
	I1028 17:07:54.872355  110282 cni.go:84] Creating CNI manager for ""
	I1028 17:07:54.872379  110282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:07:54.872389  110282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:07:54.872411  110282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-803184 NodeName:addons-803184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:07:54.872526  110282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-803184"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
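	Since the generated config spans InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, it can be exercised offline before the real init runs (a sketch; kubeadm init supports --dry-run, though this run instead passes --ignore-preflight-errors as shown later in the log):

	# Render the init steps from this config without modifying the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run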
	
	I1028 17:07:54.872583  110282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:07:54.880919  110282 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:07:54.880982  110282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 17:07:54.889159  110282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1028 17:07:54.905317  110282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:07:54.921905  110282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1028 17:07:54.938561  110282 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1028 17:07:54.942050  110282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:07:54.951988  110282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:55.025295  110282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:07:55.037847  110282 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184 for IP: 192.168.49.2
	I1028 17:07:55.037871  110282 certs.go:194] generating shared ca certs ...
	I1028 17:07:55.037887  110282 certs.go:226] acquiring lock for ca certs: {Name:mke618d91ba42d60684aa6c76238fe0c56bd6c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.038022  110282 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key
	I1028 17:07:55.221170  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt ...
	I1028 17:07:55.221205  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt: {Name:mkea952d3a2fb13dbfe6a1ba11e87b0120210fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.221375  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key ...
	I1028 17:07:55.221387  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key: {Name:mk67b003eb4a44232118a840252d439a691101af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.221456  110282 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key
	I1028 17:07:55.265245  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.crt ...
	I1028 17:07:55.265279  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.crt: {Name:mka71c04b2d5424d029443f6c74127f148cf7288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.265450  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key ...
	I1028 17:07:55.265461  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key: {Name:mka77769e743f683a6ab4fdb3dd21af12021995d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.265529  110282 certs.go:256] generating profile certs ...
	I1028 17:07:55.265585  110282 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.key
	I1028 17:07:55.265599  110282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt with IP's: []
	I1028 17:07:55.357955  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt ...
	I1028 17:07:55.357995  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: {Name:mk8ef6723fb0846854c8585d89a3380fb5acecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.358233  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.key ...
	I1028 17:07:55.358253  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.key: {Name:mka17a90b9e3666830a753c51adf6b99a61b7470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.358351  110282 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b
	I1028 17:07:55.358376  110282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1028 17:07:55.455942  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b ...
	I1028 17:07:55.455980  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b: {Name:mk236bb2e10dd05e3e31388872425979dd48f603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.456171  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b ...
	I1028 17:07:55.456188  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b: {Name:mk830c79ae7973f1db99722984141bac65df621c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.456286  110282 certs.go:381] copying /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b -> /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt
	I1028 17:07:55.456386  110282 certs.go:385] copying /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b -> /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key
	I1028 17:07:55.456459  110282 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key
	I1028 17:07:55.456484  110282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt with IP's: []
	I1028 17:07:55.521505  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt ...
	I1028 17:07:55.521543  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt: {Name:mk4f1190c5c8590c533d5b1dd4dc3bb25b064e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.521745  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key ...
	I1028 17:07:55.521764  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key: {Name:mkc24ee91e50b508a48f41bf10116699139cf180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.521969  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:07:55.522020  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem (1078 bytes)
	I1028 17:07:55.522064  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:07:55.522100  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/key.pem (1679 bytes)
	I1028 17:07:55.522737  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:07:55.545757  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:07:55.567446  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:07:55.590481  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:07:55.612500  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 17:07:55.633864  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:07:55.656440  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:07:55.678426  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:07:55.700702  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:07:55.722438  110282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:07:55.738471  110282 ssh_runner.go:195] Run: openssl version
	I1028 17:07:55.743666  110282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:07:55.752326  110282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:55.755476  110282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:55.755533  110282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:55.761785  110282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:07:55.770959  110282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:07:55.774082  110282 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:07:55.774151  110282 kubeadm.go:392] StartCluster: {Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:55.774240  110282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:07:55.774302  110282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:07:55.806505  110282 cri.go:89] found id: ""
	I1028 17:07:55.806565  110282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:07:55.814904  110282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:07:55.823510  110282 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1028 17:07:55.823566  110282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:07:55.832283  110282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:07:55.832304  110282 kubeadm.go:157] found existing configuration files:
	
	I1028 17:07:55.832356  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:07:55.840876  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:07:55.840950  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:07:55.848915  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:07:55.856851  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:07:55.856913  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:07:55.864561  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:07:55.872513  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:07:55.872584  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:07:55.880329  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:07:55.888998  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:07:55.889059  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 17:07:55.896652  110282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1028 17:07:55.931107  110282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:07:55.931529  110282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:07:55.947254  110282 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1028 17:07:55.947328  110282 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-gcp
	I1028 17:07:55.947375  110282 kubeadm.go:310] OS: Linux
	I1028 17:07:55.947428  110282 kubeadm.go:310] CGROUPS_CPU: enabled
	I1028 17:07:55.947534  110282 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1028 17:07:55.947623  110282 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1028 17:07:55.947712  110282 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1028 17:07:55.947824  110282 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1028 17:07:55.947937  110282 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1028 17:07:55.948003  110282 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1028 17:07:55.948068  110282 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1028 17:07:55.948136  110282 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1028 17:07:55.996704  110282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:07:55.996894  110282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:07:55.997059  110282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:07:56.002862  110282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:07:56.006220  110282 out.go:235]   - Generating certificates and keys ...
	I1028 17:07:56.006349  110282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:07:56.006426  110282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:07:56.323169  110282 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:07:56.676683  110282 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:07:56.807956  110282 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:07:56.922858  110282 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:07:57.203303  110282 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:07:57.203420  110282 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-803184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 17:07:57.303190  110282 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:07:57.303328  110282 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-803184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 17:07:57.383570  110282 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:07:57.471883  110282 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:07:57.563114  110282 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:07:57.563208  110282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:07:57.714733  110282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:07:57.908204  110282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:07:58.340533  110282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:07:58.395621  110282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:07:58.628328  110282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:07:58.628791  110282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:07:58.631339  110282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:07:58.633252  110282 out.go:235]   - Booting up control plane ...
	I1028 17:07:58.633386  110282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:07:58.633515  110282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:07:58.634054  110282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:07:58.642867  110282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:07:58.648133  110282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:07:58.648233  110282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:07:58.727624  110282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:07:58.727900  110282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:07:59.229173  110282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.66073ms
	I1028 17:07:59.229301  110282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:08:03.730581  110282 kubeadm.go:310] [api-check] The API server is healthy after 4.501368226s
	I1028 17:08:03.742206  110282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:08:03.754514  110282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:08:03.772913  110282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:08:03.773130  110282 kubeadm.go:310] [mark-control-plane] Marking the node addons-803184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:08:03.782672  110282 kubeadm.go:310] [bootstrap-token] Using token: mi4vsm.4k04m6igvyo5znl6
	I1028 17:08:03.785272  110282 out.go:235]   - Configuring RBAC rules ...
	I1028 17:08:03.785438  110282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:08:03.790737  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:08:03.798163  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:08:03.803283  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:08:03.806136  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:08:03.808875  110282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:08:04.136044  110282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:08:04.561039  110282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:08:05.138770  110282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:08:05.139869  110282 kubeadm.go:310] 
	I1028 17:08:05.139960  110282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:08:05.139978  110282 kubeadm.go:310] 
	I1028 17:08:05.140067  110282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:08:05.140076  110282 kubeadm.go:310] 
	I1028 17:08:05.140107  110282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:08:05.140198  110282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:08:05.140265  110282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:08:05.140274  110282 kubeadm.go:310] 
	I1028 17:08:05.140346  110282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:08:05.140384  110282 kubeadm.go:310] 
	I1028 17:08:05.140441  110282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:08:05.140456  110282 kubeadm.go:310] 
	I1028 17:08:05.140520  110282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:08:05.140616  110282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:08:05.140671  110282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:08:05.140678  110282 kubeadm.go:310] 
	I1028 17:08:05.140746  110282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:08:05.140854  110282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:08:05.140869  110282 kubeadm.go:310] 
	I1028 17:08:05.140998  110282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mi4vsm.4k04m6igvyo5znl6 \
	I1028 17:08:05.141140  110282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0dd8f5c133ceac1a3915b25678ee9c11eaa82810533cc630f757b22eb21d5ee3 \
	I1028 17:08:05.141170  110282 kubeadm.go:310] 	--control-plane 
	I1028 17:08:05.141179  110282 kubeadm.go:310] 
	I1028 17:08:05.141281  110282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:08:05.141291  110282 kubeadm.go:310] 
	I1028 17:08:05.141386  110282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mi4vsm.4k04m6igvyo5znl6 \
	I1028 17:08:05.141515  110282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0dd8f5c133ceac1a3915b25678ee9c11eaa82810533cc630f757b22eb21d5ee3 
	I1028 17:08:05.143812  110282 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-gcp\n", err: exit status 1
	I1028 17:08:05.143940  110282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
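	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, so it can be recomputed and compared against the printed value (the standard kubeadm recipe; the cert path follows this cluster's certificatesDir of /var/lib/minikube/certs):

	# Recompute the discovery token CA cert hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'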
	I1028 17:08:05.143961  110282 cni.go:84] Creating CNI manager for ""
	I1028 17:08:05.143970  110282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:08:05.146476  110282 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 17:08:05.147940  110282 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 17:08:05.151584  110282 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 17:08:05.151604  110282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 17:08:05.168663  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 17:08:05.362839  110282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:08:05.362931  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:05.362967  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-803184 minikube.k8s.io/updated_at=2024_10_28T17_08_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=addons-803184 minikube.k8s.io/primary=true
	I1028 17:08:05.370342  110282 ops.go:34] apiserver oom_adj: -16
	I1028 17:08:05.459280  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:05.960312  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:06.459536  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:06.960298  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:07.459941  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:07.960246  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:08.459817  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:08.959816  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:09.459535  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:09.960088  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:10.033473  110282 kubeadm.go:1113] duration metric: took 4.670616888s to wait for elevateKubeSystemPrivileges
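	The burst of identical "kubectl get sa default" calls above is a readiness poll: the default ServiceAccount exists only once the controller manager's service-account controller has run, so minikube retries on a fixed interval until the get succeeds. The equivalent shell idiom (a sketch using the same binary and kubeconfig paths as the log):

	# Wait until the default ServiceAccount has been created
	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done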
	I1028 17:08:10.033515  110282 kubeadm.go:394] duration metric: took 14.259369739s to StartCluster
	I1028 17:08:10.033539  110282 settings.go:142] acquiring lock: {Name:mk5660b45458ca6389d875a5473d75a5cb1d1df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:08:10.033662  110282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:08:10.034180  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/kubeconfig: {Name:mk9c3758014b9f711e0c502c4f4a5172f5e22b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:08:10.034483  110282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:08:10.034700  110282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
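Each `Setting addon ...=true` entry that follows is driven by the `toEnable` map above and ends in the same pattern visible later in this log: the addon's YAML is copied over SSH into `/etc/kubernetes/addons/` on the node, then applied with the version-pinned kubectl. Sketched generically (`<addon>.yaml` is a placeholder, paths as in the log):

    # Generic addon install step, as minikube performs it on the node
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/<addon>.yaml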
	I1028 17:08:10.034830  110282 addons.go:69] Setting yakd=true in profile "addons-803184"
	I1028 17:08:10.034854  110282 addons.go:234] Setting addon yakd=true in "addons-803184"
	I1028 17:08:10.034884  110282 config.go:182] Loaded profile config "addons-803184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:08:10.034902  110282 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-803184"
	I1028 17:08:10.034914  110282 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-803184"
	I1028 17:08:10.034889  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.034937  110282 addons.go:69] Setting cloud-spanner=true in profile "addons-803184"
	I1028 17:08:10.034949  110282 addons.go:234] Setting addon cloud-spanner=true in "addons-803184"
	I1028 17:08:10.034963  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.035519  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.034771  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:08:10.034930  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.035613  110282 addons.go:69] Setting metrics-server=true in profile "addons-803184"
	I1028 17:08:10.035638  110282 addons.go:234] Setting addon metrics-server=true in "addons-803184"
	I1028 17:08:10.035678  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.035749  110282 addons.go:69] Setting storage-provisioner=true in profile "addons-803184"
	I1028 17:08:10.035777  110282 addons.go:234] Setting addon storage-provisioner=true in "addons-803184"
	I1028 17:08:10.035770  110282 addons.go:69] Setting gcp-auth=true in profile "addons-803184"
	I1028 17:08:10.035835  110282 mustload.go:65] Loading cluster: addons-803184
	I1028 17:08:10.035847  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.036027  110282 config.go:182] Loaded profile config "addons-803184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:08:10.036127  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036146  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036243  110282 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-803184"
	I1028 17:08:10.036290  110282 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-803184"
	I1028 17:08:10.036311  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036317  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.036327  110282 addons.go:69] Setting default-storageclass=true in profile "addons-803184"
	I1028 17:08:10.036345  110282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-803184"
	I1028 17:08:10.036602  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036774  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036982  110282 addons.go:69] Setting volcano=true in profile "addons-803184"
	I1028 17:08:10.037007  110282 addons.go:234] Setting addon volcano=true in "addons-803184"
	I1028 17:08:10.037039  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.037510  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.037705  110282 addons.go:69] Setting volumesnapshots=true in profile "addons-803184"
	I1028 17:08:10.037725  110282 addons.go:234] Setting addon volumesnapshots=true in "addons-803184"
	I1028 17:08:10.037765  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.038223  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.039075  110282 addons.go:69] Setting inspektor-gadget=true in profile "addons-803184"
	I1028 17:08:10.039096  110282 addons.go:234] Setting addon inspektor-gadget=true in "addons-803184"
	I1028 17:08:10.039130  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.039626  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036312  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.041997  110282 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-803184"
	I1028 17:08:10.042028  110282 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-803184"
	I1028 17:08:10.042154  110282 addons.go:69] Setting ingress=true in profile "addons-803184"
	I1028 17:08:10.042216  110282 addons.go:234] Setting addon ingress=true in "addons-803184"
	I1028 17:08:10.042296  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.042436  110282 addons.go:69] Setting ingress-dns=true in profile "addons-803184"
	I1028 17:08:10.042517  110282 addons.go:234] Setting addon ingress-dns=true in "addons-803184"
	I1028 17:08:10.042591  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.042637  110282 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-803184"
	I1028 17:08:10.042668  110282 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-803184"
	I1028 17:08:10.042727  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.043329  110282 addons.go:69] Setting registry=true in profile "addons-803184"
	I1028 17:08:10.043383  110282 addons.go:234] Setting addon registry=true in "addons-803184"
	I1028 17:08:10.043425  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.043507  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.047670  110282 out.go:177] * Verifying Kubernetes components...
	I1028 17:08:10.035522  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.050323  110282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:08:10.068259  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.068260  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.068644  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.069239  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.081727  110282 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 17:08:10.083030  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 17:08:10.083067  110282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 17:08:10.083146  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.088652  110282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:08:10.090159  110282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:08:10.090184  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:08:10.090249  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.107297  110282 addons.go:234] Setting addon default-storageclass=true in "addons-803184"
	I1028 17:08:10.107351  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.107741  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	W1028 17:08:10.108019  110282 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 17:08:10.109133  110282 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 17:08:10.109198  110282 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 17:08:10.109645  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 17:08:10.110377  110282 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 17:08:10.121181  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 17:08:10.121253  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.122995  110282 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 17:08:10.123118  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 17:08:10.123376  110282 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:08:10.123520  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.124551  110282 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:08:10.124569  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 17:08:10.124619  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.123566  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 17:08:10.125004  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.126005  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 17:08:10.127256  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 17:08:10.127325  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 17:08:10.132040  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 17:08:10.132072  110282 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 17:08:10.132145  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.132302  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 17:08:10.132465  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:10.133626  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.134267  110282 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 17:08:10.138924  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:10.139065  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 17:08:10.139081  110282 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 17:08:10.139149  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.139370  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 17:08:10.139422  110282 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 17:08:10.141042  110282 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:08:10.141074  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 17:08:10.141128  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.141349  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 17:08:10.141461  110282 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 17:08:10.141475  110282 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 17:08:10.141538  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.146115  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 17:08:10.147423  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 17:08:10.148608  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 17:08:10.148630  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 17:08:10.148750  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.149852  110282 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 17:08:10.150861  110282 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 17:08:10.152107  110282 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:08:10.152125  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 17:08:10.152168  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.152285  110282 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 17:08:10.153499  110282 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 17:08:10.153523  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 17:08:10.153582  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.162368  110282 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-803184"
	I1028 17:08:10.162419  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.162858  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.163983  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.169440  110282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:08:10.169462  110282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:08:10.169508  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.186995  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.198927  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.204289  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.204281  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.207311  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.208995  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.214507  110282 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 17:08:10.214723  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.217114  110282 out.go:177]   - Using image docker.io/busybox:stable
	I1028 17:08:10.217730  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.217763  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.218470  110282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:08:10.218490  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 17:08:10.218547  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.218812  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.220243  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.252010  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.438240  110282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:08:10.438394  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:08:10.447826  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:08:10.538337  110282 node_ready.go:35] waiting up to 6m0s for node "addons-803184" to be "Ready" ...
	I1028 17:08:10.638532  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 17:08:10.646721  110282 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 17:08:10.646805  110282 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 17:08:10.651033  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:08:10.729834  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:08:10.729960  110282 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 17:08:10.729981  110282 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 17:08:10.732574  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 17:08:10.732601  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 17:08:10.739340  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:08:10.744874  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 17:08:10.744902  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 17:08:10.749961  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:08:10.755927  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:08:10.829282  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 17:08:10.829382  110282 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 17:08:10.829853  110282 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:08:10.829926  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 17:08:10.846589  110282 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:08:10.846621  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 17:08:10.849286  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:08:10.947723  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:08:10.949137  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 17:08:10.949223  110282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 17:08:11.028742  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 17:08:11.028841  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 17:08:11.145157  110282 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 17:08:11.145251  110282 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 17:08:11.229707  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 17:08:11.229801  110282 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 17:08:11.234100  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:08:11.247204  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:08:11.247310  110282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 17:08:11.529576  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:08:11.541795  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 17:08:11.541824  110282 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 17:08:11.629260  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 17:08:11.629360  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 17:08:11.732222  110282 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 17:08:11.732324  110282 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 17:08:11.939740  110282 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.501301793s)
	I1028 17:08:11.939943  110282 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
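The long pipeline completed above edits the CoreDNS Corefile in place: it dumps the `coredns` ConfigMap as YAML, uses sed to insert a `hosts` block (mapping `host.minikube.internal` to the gateway IP 192.168.49.1) before the `forward . /etc/resolv.conf` line, and feeds the result back through `kubectl replace`. Reduced to its essentials:

    # Inject a static host entry into CoreDNS and replace the ConfigMap
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -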
	I1028 17:08:12.134520  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:08:12.134615  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 17:08:12.335863  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 17:08:12.335965  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 17:08:12.340811  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:08:12.346103  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 17:08:12.346134  110282 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 17:08:12.735694  110282 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-803184" context rescaled to 1 replicas
	I1028 17:08:12.841169  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 17:08:12.841201  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 17:08:12.842123  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:13.032044  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 17:08:13.032140  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 17:08:13.141442  110282 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:13.141534  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 17:08:13.529469  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 17:08:13.529501  110282 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 17:08:13.628578  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:13.734129  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 17:08:13.734171  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 17:08:13.846625  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 17:08:13.846656  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 17:08:14.029797  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:08:14.029885  110282 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 17:08:14.129474  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:08:14.241465  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.793587157s)
	I1028 17:08:14.241592  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.60302688s)
	I1028 17:08:15.050271  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:15.656273  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.005145645s)
	I1028 17:08:15.656315  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.926445805s)
	I1028 17:08:15.656339  110282 addons.go:475] Verifying addon ingress=true in "addons-803184"
	I1028 17:08:15.656360  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.916992637s)
	I1028 17:08:15.656430  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.906429557s)
	I1028 17:08:15.656665  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.900702293s)
	I1028 17:08:15.656714  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.807390885s)
	I1028 17:08:15.656867  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.709103357s)
	I1028 17:08:15.656900  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.422717063s)
	I1028 17:08:15.656918  110282 addons.go:475] Verifying addon registry=true in "addons-803184"
	I1028 17:08:15.656975  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.127301939s)
	I1028 17:08:15.656992  110282 addons.go:475] Verifying addon metrics-server=true in "addons-803184"
	I1028 17:08:15.657177  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.316281384s)
	I1028 17:08:15.657967  110282 out.go:177] * Verifying registry addon...
	I1028 17:08:15.657993  110282 out.go:177] * Verifying ingress addon...
	I1028 17:08:15.659081  110282 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-803184 service yakd-dashboard -n yakd-dashboard
	
	I1028 17:08:15.661324  110282 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 17:08:15.661322  110282 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 17:08:15.731593  110282 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 17:08:15.731629  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:15.731900  110282 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 17:08:15.731969  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1028 17:08:15.732213  110282 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
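The `storage-provisioner-rancher` warning above is an optimistic-concurrency conflict: marking `local-path` as default updates the StorageClass's `storageclass.kubernetes.io/is-default-class` annotation, and another writer modified the object between the read and the write. Re-fetching and patching normally resolves it; a manual equivalent (retry logic omitted):

    # Mark local-path as the default StorageClass
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'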
	I1028 17:08:16.167702  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.168403  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.455236  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.826535338s)
	W1028 17:08:16.455289  110282 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 17:08:16.455322  110282 retry.go:31] will retry after 337.107711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
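This failure is the classic CRD race: the VolumeSnapshot CRDs and a `VolumeSnapshotClass` instance were applied in one batch, and the API server had not yet established the new types when the instance was validated, hence "no matches for kind". minikube's retry (and the `apply --force` that follows below) papers over it; the orderly fix is to apply the CRDs first and wait for their `Established` condition before creating instances, e.g.:

    # Apply CRDs, wait for them to be served, then create the instances
    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f csi-hostpath-snapshotclass.yaml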
	I1028 17:08:16.665404  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.666016  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.761544  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.631944388s)
	I1028 17:08:16.761593  110282 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-803184"
	I1028 17:08:16.763524  110282 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 17:08:16.766034  110282 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 17:08:16.770612  110282 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 17:08:16.770640  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:16.793453  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:17.165808  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.166300  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.269664  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:17.331635  110282 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 17:08:17.331707  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:17.349113  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:17.445621  110282 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 17:08:17.462282  110282 addons.go:234] Setting addon gcp-auth=true in "addons-803184"
	I1028 17:08:17.462347  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:17.462709  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:17.479911  110282 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 17:08:17.479971  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:17.496691  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:17.541608  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:17.664947  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.665374  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.769985  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.165187  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.165520  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.270092  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.664773  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.665133  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.770002  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.165697  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.166161  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.269902  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.270771  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47726362s)
	I1028 17:08:19.270841  110282 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.790897948s)
	I1028 17:08:19.272848  110282 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 17:08:19.274458  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:19.275859  110282 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 17:08:19.275884  110282 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 17:08:19.293289  110282 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 17:08:19.293321  110282 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 17:08:19.309748  110282 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 17:08:19.309772  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 17:08:19.326199  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 17:08:19.542127  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:19.642364  110282 addons.go:475] Verifying addon gcp-auth=true in "addons-803184"
	I1028 17:08:19.643993  110282 out.go:177] * Verifying gcp-auth addon...
	I1028 17:08:19.646529  110282 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 17:08:19.649679  110282 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 17:08:19.649700  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:19.750743  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.751179  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.769590  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.150089  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.164819  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.165152  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.269630  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.650295  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.665058  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.665431  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.769960  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.149295  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.165015  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.165404  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.270003  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.650198  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.664869  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.665391  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.770015  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.041617  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:22.150265  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.164868  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.165426  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.269976  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.650008  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.664750  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.665180  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.770035  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.150562  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.165229  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.165781  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.269524  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.652291  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.670641  110282 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 17:08:23.670667  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.671005  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.771105  110282 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 17:08:23.771136  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.045055  110282 node_ready.go:49] node "addons-803184" has status "Ready":"True"
	I1028 17:08:24.045081  110282 node_ready.go:38] duration metric: took 13.506692553s for node "addons-803184" to be "Ready" ...
	I1028 17:08:24.045091  110282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:08:24.054058  110282 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace to be "Ready" ...
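From here the log is dominated by readiness polling: the node went `Ready` after ~13.5s, and minikube now waits (up to 6m) for the system-critical pods and for each addon's label selector to report ready. The same checks can be reproduced directly with kubectl's built-in wait, using selectors taken from the log:

    # Node readiness, then the ingress-nginx controller pods
    kubectl wait --for=condition=Ready node/addons-803184 --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=6m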
	I1028 17:08:24.149948  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.165238  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.165523  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.272123  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.652300  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.753473  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.753500  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.853680  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.150596  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.165516  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.165908  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.271463  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.651310  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.664845  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.665474  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.834244  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.060275  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:26.150653  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.165879  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.166112  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.271426  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.650617  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.666303  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.666598  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.770653  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.151295  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.165496  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.165637  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.271133  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.650463  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.665150  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.665422  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.770907  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.151054  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.166459  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.166768  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.271052  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.559626  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:28.650419  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.665455  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.665695  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.772752  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.150934  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.164585  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.164937  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.270971  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.650598  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.665906  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.666341  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.771405  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.150921  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.230271  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.230498  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.331230  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.560877  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:30.650773  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.664712  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.664950  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.771350  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.151020  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.165288  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.165585  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.271574  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.650632  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.665903  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.666517  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.770698  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.150425  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.165986  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.166311  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.271144  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.651132  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.665648  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.665724  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.771538  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.060192  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:33.150828  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.165482  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.165726  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.270875  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.650777  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.665000  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.665309  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.772549  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.150488  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.166123  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.167613  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.271246  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.650347  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.665686  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.666301  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.770640  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.060635  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:35.150801  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.164844  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.165261  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.271100  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.650776  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.664668  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.664768  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.770032  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.150503  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.165760  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.165974  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.270464  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.650403  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.665070  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.665376  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.770789  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.150711  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.164533  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.164903  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.271490  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.560106  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:37.649951  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.664987  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.665177  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.770949  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.150514  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.165644  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.165715  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.271658  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.649860  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.665709  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.666911  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.770777  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.150540  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.165486  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.165593  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.270982  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.560324  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:39.650014  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.665102  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.665721  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.770686  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.150089  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.164974  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.165219  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.271672  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.650863  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.664724  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.665238  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.770391  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.151138  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.165282  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.165549  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.270608  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.650022  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.665355  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.665767  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.771118  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.059739  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:42.150929  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.164869  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.165444  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.270595  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.649869  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.665048  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.665197  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.770305  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.151478  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.166221  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.166449  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.270932  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.650377  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.721368  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.721884  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.771095  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:44.150078  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.165256  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.165620  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.270757  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:44.560507  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:44.650255  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.665134  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.665395  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.771285  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.150254  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:45.165441  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.165547  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.271473  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.649689  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:45.664874  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.665251  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.770467  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.150651  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.165878  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.166280  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.270485  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.560771  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:46.650597  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.666432  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.666971  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.771260  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.151218  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.166352  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.167589  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.271728  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.649954  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.665410  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.665589  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.770814  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.149943  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.165667  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.166180  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.271418  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.650464  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.665790  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.665990  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.770247  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.060485  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:49.151287  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.165025  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.165213  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.270636  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.649855  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.664908  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.665340  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.770688  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.150818  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.165752  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.166372  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.271215  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.651021  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.665050  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.665201  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.785036  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.150229  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.165427  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.165899  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.270889  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.560455  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:51.650142  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.665108  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.665434  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.776425  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.150914  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.164951  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.165706  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.270191  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.650539  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.665635  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.665975  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.771184  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.150413  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.165599  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:53.165858  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.269940  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.650517  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.665497  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:53.665817  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.769691  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.060431  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:54.151223  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.166212  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.166659  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:54.271318  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.650586  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.665510  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:54.665708  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.770167  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.150116  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.165176  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:55.165434  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.270372  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.650776  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.665069  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:55.665424  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.770745  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.150294  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.165152  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:56.165430  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.270855  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.559561  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:56.650813  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.664537  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:56.664913  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.770385  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.150893  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.164820  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:57.165154  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.271199  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.650178  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.664894  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:57.665138  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.770678  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.149567  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.165886  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:58.166264  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.271120  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.559927  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:58.649821  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.666445  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:58.667004  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.771079  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.150540  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:59.165464  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:59.165808  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.270856  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.650398  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:59.665740  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:59.666070  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.770023  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.150782  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.165086  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:00.165337  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.270376  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.650608  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.665182  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:00.665668  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.770551  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.073723  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:01.200848  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.200969  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:01.201534  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.272656  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.649597  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.665928  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:01.666504  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.771543  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:02.150560  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.165169  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:02.165528  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.271095  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:02.650506  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.665349  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:02.665781  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.770762  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.060648  110282 pod_ready.go:93] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.060672  110282 pod_ready.go:82] duration metric: took 39.006583775s for pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.060683  110282 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mc8s8" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.065593  110282 pod_ready.go:93] pod "coredns-7c65d6cfc9-mc8s8" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.065620  110282 pod_ready.go:82] duration metric: took 4.930204ms for pod "coredns-7c65d6cfc9-mc8s8" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.065642  110282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.069861  110282 pod_ready.go:93] pod "etcd-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.069879  110282 pod_ready.go:82] duration metric: took 4.230851ms for pod "etcd-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.069891  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.074031  110282 pod_ready.go:93] pod "kube-apiserver-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.074075  110282 pod_ready.go:82] duration metric: took 4.177055ms for pod "kube-apiserver-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.074086  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.078207  110282 pod_ready.go:93] pod "kube-controller-manager-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.078232  110282 pod_ready.go:82] duration metric: took 4.140902ms for pod "kube-controller-manager-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.078245  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rlsxn" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.150077  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:03.165065  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:03.165547  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.270949  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.458646  110282 pod_ready.go:93] pod "kube-proxy-rlsxn" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.458673  110282 pod_ready.go:82] duration metric: took 380.420923ms for pod "kube-proxy-rlsxn" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.458686  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.651103  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:03.665097  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:03.665443  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.770559  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.858538  110282 pod_ready.go:93] pod "kube-scheduler-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.858565  110282 pod_ready.go:82] duration metric: took 399.869817ms for pod "kube-scheduler-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.858580  110282 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:04.150380  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.165292  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:04.165609  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.271891  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:04.649881  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.664818  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:04.665370  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.770340  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.150824  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.164761  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:05.164924  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.270324  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.650696  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.664567  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:05.665146  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.769868  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.864209  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:06.150543  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.165910  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.166363  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:06.269872  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:06.650552  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.665886  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:06.666461  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.770266  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.150892  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.165574  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.165791  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:07.270956  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.650024  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.732116  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:07.732658  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.834083  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.034902  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:08.233421  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.234539  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:08.235973  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.332535  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.650561  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.732416  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:08.732712  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.831953  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.150755  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.165158  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:09.165346  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.271484  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.650785  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.665833  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:09.666060  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.770670  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.150542  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.166495  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:10.166974  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.271347  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.365782  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:10.650816  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.664767  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:10.665482  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.770074  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.150598  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.166082  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:11.166591  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.270728  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.650322  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.665709  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:11.665927  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.769969  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.149471  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.165650  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:12.165879  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.270959  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.650090  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.751676  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:12.752128  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.771140  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.864301  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:13.150655  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.252203  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:13.252501  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.270442  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:13.650528  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.665619  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:13.665868  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.769550  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.150060  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.165130  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:14.165247  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.271009  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.650417  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.665383  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:14.665690  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.770375  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.865451  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:15.149979  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.165098  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:15.165362  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.270859  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:15.650189  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.665465  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:15.665641  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.771033  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.150495  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.165459  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:16.165689  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.269789  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.650472  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.665657  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:16.666189  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.772068  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.150581  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.165529  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:17.165907  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.270416  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.365088  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:17.650170  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.665176  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:17.665542  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.771434  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.151047  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.232586  110282 kapi.go:107] duration metric: took 1m2.571261706s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 17:09:18.232964  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.336202  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.649906  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.666599  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.832504  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.151407  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.232414  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.331872  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.434388  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:19.649861  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.665502  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.770926  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.150214  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.166116  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.270511  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.650208  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.666219  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.770561  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.150163  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.165564  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.271396  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.650013  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.665229  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.771025  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.864379  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:22.150736  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.166182  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.270335  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:22.650262  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.665382  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.771088  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.186257  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:23.187099  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.291096  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.650663  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:23.664923  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.769956  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.864581  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:24.150532  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.166506  110282 kapi.go:107] duration metric: took 1m8.505173263s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 17:09:24.276332  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:24.651561  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.832322  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.150352  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.271587  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.650283  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.772081  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.865108  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:26.150447  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.271363  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:26.650537  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.769940  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:27.149898  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.271080  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:27.650700  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.770629  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:28.150687  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.270280  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:28.364649  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:28.649693  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.770447  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:29.149680  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.270476  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:29.650355  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.771431  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:30.150319  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.271479  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:30.364948  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:30.650091  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.773330  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:31.151540  110282 kapi.go:107] duration metric: took 1m11.505006517s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 17:09:31.153288  110282 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-803184 cluster.
	I1028 17:09:31.154690  110282 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 17:09:31.156335  110282 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
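	(Note, not part of the test output: the gcp-auth messages above mention the `gcp-auth-skip-secret` label key. A minimal sketch of a pod that opts out of credential mounting; the pod name, image, and the label value "true" are assumptions, only the label key comes from the message above.)

	    kubectl --context addons-803184 apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo          # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"    # key from the message above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: nginx
	    EOF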
	I1028 17:09:31.269695  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:31.771002  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:32.270697  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:32.365206  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:32.770626  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:33.270128  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:33.773103  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:34.271194  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:34.365242  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:34.770899  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:35.271158  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:35.771910  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:36.271050  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:36.770823  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:36.864611  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:37.271020  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:37.770744  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:38.271237  110282 kapi.go:107] duration metric: took 1m21.505202879s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 17:09:38.273097  110282 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1028 17:09:38.274635  110282 addons.go:510] duration metric: took 1m28.23993761s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
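	(Note, not part of the test output: the `kapi.go:96` polling above repeatedly checks pods matching label selectors such as `kubernetes.io/minikube-addons=csi-hostpath-driver` until they report Ready. A roughly equivalent manual check with `kubectl wait`, assuming the addon pods live in `kube-system` as the pod list later in this log shows:)

	    kubectl --context addons-803184 -n kube-system wait pod \
	      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	      --for=condition=Ready --timeout=120s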
	I1028 17:09:39.364982  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:41.864957  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:44.365046  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:46.864952  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:49.364524  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:51.864961  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:54.364138  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:56.364966  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:58.864722  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:10:01.365015  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:10:03.864227  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:10:04.364848  110282 pod_ready.go:93] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"True"
	I1028 17:10:04.364876  110282 pod_ready.go:82] duration metric: took 1m0.50628719s for pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace to be "Ready" ...
	I1028 17:10:04.364891  110282 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z7q9t" in "kube-system" namespace to be "Ready" ...
	I1028 17:10:04.369378  110282 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-z7q9t" in "kube-system" namespace has status "Ready":"True"
	I1028 17:10:04.369402  110282 pod_ready.go:82] duration metric: took 4.503001ms for pod "nvidia-device-plugin-daemonset-z7q9t" in "kube-system" namespace to be "Ready" ...
	I1028 17:10:04.369424  110282 pod_ready.go:39] duration metric: took 1m40.324322498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:10:04.369447  110282 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:10:04.369485  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:04.369563  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:04.405890  110282 cri.go:89] found id: "3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:04.405917  110282 cri.go:89] found id: ""
	I1028 17:10:04.405927  110282 logs.go:282] 1 containers: [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f]
	I1028 17:10:04.405981  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.409531  110282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:04.409604  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:04.448147  110282 cri.go:89] found id: "73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:04.448173  110282 cri.go:89] found id: ""
	I1028 17:10:04.448181  110282 logs.go:282] 1 containers: [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4]
	I1028 17:10:04.448227  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.451666  110282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:04.451728  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:04.486717  110282 cri.go:89] found id: "e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:04.486738  110282 cri.go:89] found id: ""
	I1028 17:10:04.486746  110282 logs.go:282] 1 containers: [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d]
	I1028 17:10:04.486800  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.490300  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:04.490359  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:04.522706  110282 cri.go:89] found id: "435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:04.522735  110282 cri.go:89] found id: ""
	I1028 17:10:04.522744  110282 logs.go:282] 1 containers: [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c]
	I1028 17:10:04.522805  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.526174  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:04.526242  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:04.561918  110282 cri.go:89] found id: "623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:04.561942  110282 cri.go:89] found id: ""
	I1028 17:10:04.561952  110282 logs.go:282] 1 containers: [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d]
	I1028 17:10:04.562009  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.565636  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:04.565700  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:04.599837  110282 cri.go:89] found id: "e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:04.599874  110282 cri.go:89] found id: ""
	I1028 17:10:04.599885  110282 logs.go:282] 1 containers: [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1]
	I1028 17:10:04.599953  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.603442  110282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:04.603500  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:04.636798  110282 cri.go:89] found id: "6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:04.636826  110282 cri.go:89] found id: ""
	I1028 17:10:04.636835  110282 logs.go:282] 1 containers: [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc]
	I1028 17:10:04.636893  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.640521  110282 logs.go:123] Gathering logs for kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] ...
	I1028 17:10:04.640558  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:04.706235  110282 logs.go:123] Gathering logs for container status ...
	I1028 17:10:04.706278  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:04.753427  110282 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:04.753462  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:04.814201  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:04.814376  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:04.841985  110282 logs.go:123] Gathering logs for etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] ...
	I1028 17:10:04.842029  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:04.885980  110282 logs.go:123] Gathering logs for coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] ...
	I1028 17:10:04.886020  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:04.921279  110282 logs.go:123] Gathering logs for kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] ...
	I1028 17:10:04.921311  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:04.960982  110282 logs.go:123] Gathering logs for kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] ...
	I1028 17:10:04.961019  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:04.994269  110282 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:04.994301  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:05.010012  110282 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:05.010049  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:05.109317  110282 logs.go:123] Gathering logs for kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] ...
	I1028 17:10:05.109352  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:05.152289  110282 logs.go:123] Gathering logs for kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] ...
	I1028 17:10:05.152332  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:05.188911  110282 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:05.188947  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:05.268596  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:05.268635  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:05.268738  110282 out.go:270] X Problems detected in kubelet:
	W1028 17:10:05.268759  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:05.268772  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:05.268788  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:05.268800  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
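	(Note, not part of the test output: the log-gathering pass above shells into the node and tails unit logs and container state. A hand-run equivalent using the same commands that appear verbatim in the `ssh_runner` lines; only the `minikube ssh` wrapper is added here:)

	    minikube -p addons-803184 ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p addons-803184 ssh -- sudo journalctl -u crio -n 400
	    minikube -p addons-803184 ssh -- "sudo crictl ps -a || sudo docker ps -a"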
	I1028 17:10:15.269390  110282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:10:15.283609  110282 api_server.go:72] duration metric: took 2m5.24908153s to wait for apiserver process to appear ...
	I1028 17:10:15.283644  110282 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:10:15.283685  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:15.283736  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:15.316858  110282 cri.go:89] found id: "3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:15.316884  110282 cri.go:89] found id: ""
	I1028 17:10:15.316892  110282 logs.go:282] 1 containers: [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f]
	I1028 17:10:15.316948  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.320309  110282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:15.320368  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:15.353440  110282 cri.go:89] found id: "73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:15.353462  110282 cri.go:89] found id: ""
	I1028 17:10:15.353470  110282 logs.go:282] 1 containers: [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4]
	I1028 17:10:15.353520  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.356974  110282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:15.357068  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:15.390721  110282 cri.go:89] found id: "e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:15.390750  110282 cri.go:89] found id: ""
	I1028 17:10:15.390762  110282 logs.go:282] 1 containers: [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d]
	I1028 17:10:15.390824  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.394234  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:15.394299  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:15.428340  110282 cri.go:89] found id: "435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:15.428358  110282 cri.go:89] found id: ""
	I1028 17:10:15.428367  110282 logs.go:282] 1 containers: [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c]
	I1028 17:10:15.428412  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.431826  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:15.431911  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:15.467161  110282 cri.go:89] found id: "623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:15.467197  110282 cri.go:89] found id: ""
	I1028 17:10:15.467207  110282 logs.go:282] 1 containers: [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d]
	I1028 17:10:15.467263  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.470856  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:15.470921  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:15.506667  110282 cri.go:89] found id: "e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:15.506693  110282 cri.go:89] found id: ""
	I1028 17:10:15.506706  110282 logs.go:282] 1 containers: [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1]
	I1028 17:10:15.506766  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.510174  110282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:15.510249  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:15.546325  110282 cri.go:89] found id: "6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:15.546347  110282 cri.go:89] found id: ""
	I1028 17:10:15.546355  110282 logs.go:282] 1 containers: [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc]
	I1028 17:10:15.546397  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.549866  110282 logs.go:123] Gathering logs for coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] ...
	I1028 17:10:15.549896  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:15.584618  110282 logs.go:123] Gathering logs for kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] ...
	I1028 17:10:15.584651  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:15.617522  110282 logs.go:123] Gathering logs for kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] ...
	I1028 17:10:15.617554  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:15.653068  110282 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:15.653105  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:15.705535  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:15.705713  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:15.733544  110282 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:15.733592  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:15.751343  110282 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:15.751382  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:15.888048  110282 logs.go:123] Gathering logs for kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] ...
	I1028 17:10:15.888083  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:15.931670  110282 logs.go:123] Gathering logs for etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] ...
	I1028 17:10:15.931708  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:15.974756  110282 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:15.974789  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:16.048818  110282 logs.go:123] Gathering logs for container status ...
	I1028 17:10:16.048861  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:16.089724  110282 logs.go:123] Gathering logs for kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] ...
	I1028 17:10:16.089759  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:16.128936  110282 logs.go:123] Gathering logs for kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] ...
	I1028 17:10:16.128978  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:16.185889  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:16.185929  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:16.185995  110282 out.go:270] X Problems detected in kubelet:
	W1028 17:10:16.186009  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:16.186017  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:16.186028  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:16.186033  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:10:26.186741  110282 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1028 17:10:26.190858  110282 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1028 17:10:26.191838  110282 api_server.go:141] control plane version: v1.31.2
	I1028 17:10:26.191865  110282 api_server.go:131] duration metric: took 10.908213353s to wait for apiserver health ...
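	(Note, not part of the test output: the healthz probe above hits the apiserver endpoint directly and got a 200 with body `ok`. From inside the node it can be reproduced with curl; `-k` skips TLS verification, and anonymous access to /healthz is an assumption, since a locked-down apiserver may answer 401/403 instead:)

	    minikube -p addons-803184 ssh -- curl -sk https://192.168.49.2:8443/healthz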
	I1028 17:10:26.191873  110282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:10:26.191894  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:26.191948  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:26.225575  110282 cri.go:89] found id: "3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:26.225616  110282 cri.go:89] found id: ""
	I1028 17:10:26.225627  110282 logs.go:282] 1 containers: [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f]
	I1028 17:10:26.225689  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.229192  110282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:26.229255  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:26.262556  110282 cri.go:89] found id: "73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:26.262580  110282 cri.go:89] found id: ""
	I1028 17:10:26.262589  110282 logs.go:282] 1 containers: [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4]
	I1028 17:10:26.262647  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.266736  110282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:26.266812  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:26.300967  110282 cri.go:89] found id: "e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:26.300989  110282 cri.go:89] found id: ""
	I1028 17:10:26.300997  110282 logs.go:282] 1 containers: [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d]
	I1028 17:10:26.301063  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.304956  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:26.305053  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:26.339588  110282 cri.go:89] found id: "435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:26.339611  110282 cri.go:89] found id: ""
	I1028 17:10:26.339620  110282 logs.go:282] 1 containers: [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c]
	I1028 17:10:26.339676  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.343202  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:26.343272  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:26.376768  110282 cri.go:89] found id: "623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:26.376795  110282 cri.go:89] found id: ""
	I1028 17:10:26.376806  110282 logs.go:282] 1 containers: [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d]
	I1028 17:10:26.376867  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.380729  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:26.380810  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:26.416048  110282 cri.go:89] found id: "e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:26.416071  110282 cri.go:89] found id: ""
	I1028 17:10:26.416079  110282 logs.go:282] 1 containers: [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1]
	I1028 17:10:26.416122  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.419553  110282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:26.419629  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:26.453985  110282 cri.go:89] found id: "6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:26.454007  110282 cri.go:89] found id: ""
	I1028 17:10:26.454014  110282 logs.go:282] 1 containers: [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc]
	I1028 17:10:26.454069  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.457409  110282 logs.go:123] Gathering logs for coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] ...
	I1028 17:10:26.457438  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:26.491555  110282 logs.go:123] Gathering logs for kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] ...
	I1028 17:10:26.491584  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:26.546664  110282 logs.go:123] Gathering logs for kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] ...
	I1028 17:10:26.546736  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:26.581345  110282 logs.go:123] Gathering logs for container status ...
	I1028 17:10:26.581377  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:26.623141  110282 logs.go:123] Gathering logs for kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] ...
	I1028 17:10:26.623176  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:26.664601  110282 logs.go:123] Gathering logs for kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] ...
	I1028 17:10:26.664638  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:26.698284  110282 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:26.698323  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:26.777242  110282 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:26.777287  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:26.829071  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:26.829254  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:26.858029  110282 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:26.858071  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:26.875632  110282 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:26.875675  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:26.975563  110282 logs.go:123] Gathering logs for kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] ...
	I1028 17:10:26.975603  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:27.019555  110282 logs.go:123] Gathering logs for etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] ...
	I1028 17:10:27.019591  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:27.066582  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:27.066615  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:27.066676  110282 out.go:270] X Problems detected in kubelet:
	W1028 17:10:27.066689  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:27.066698  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:27.066711  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:27.066716  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:10:37.079264  110282 system_pods.go:59] 19 kube-system pods found
	I1028 17:10:37.079305  110282 system_pods.go:61] "amd-gpu-device-plugin-jhlpw" [f711d106-eb63-4b6b-8661-25cd70f4f3b1] Running
	I1028 17:10:37.079314  110282 system_pods.go:61] "coredns-7c65d6cfc9-mc8s8" [9f5e6a87-7e82-49cc-bea9-1975bf9e65dd] Running
	I1028 17:10:37.079320  110282 system_pods.go:61] "csi-hostpath-attacher-0" [da3d6207-32ff-44d2-b0ee-df10b36350ac] Running
	I1028 17:10:37.079326  110282 system_pods.go:61] "csi-hostpath-resizer-0" [7e68a1f2-b307-486c-ac3b-4c103de4e95c] Running
	I1028 17:10:37.079354  110282 system_pods.go:61] "csi-hostpathplugin-728fs" [d161e22d-e638-413a-aa21-c02a59e7f793] Running
	I1028 17:10:37.079360  110282 system_pods.go:61] "etcd-addons-803184" [a95b6003-b239-4852-b463-0fff9cd0f206] Running
	I1028 17:10:37.079365  110282 system_pods.go:61] "kindnet-hj2qh" [32e72145-ef94-4e95-b3f8-99108d471a86] Running
	I1028 17:10:37.079371  110282 system_pods.go:61] "kube-apiserver-addons-803184" [39798403-bdbb-47d9-89c1-768e79344f2b] Running
	I1028 17:10:37.079377  110282 system_pods.go:61] "kube-controller-manager-addons-803184" [11d701f0-9111-4089-8635-652492fc24a3] Running
	I1028 17:10:37.079384  110282 system_pods.go:61] "kube-ingress-dns-minikube" [079c2ef4-da73-455a-90bb-fe1a00f5ef5d] Running
	I1028 17:10:37.079401  110282 system_pods.go:61] "kube-proxy-rlsxn" [c8571a1a-da60-4e3a-80b6-4739a4f2b0d7] Running
	I1028 17:10:37.079407  110282 system_pods.go:61] "kube-scheduler-addons-803184" [8b63e9e0-0128-44b5-8ca5-f90c9ea46b5e] Running
	I1028 17:10:37.079414  110282 system_pods.go:61] "metrics-server-84c5f94fbc-674zg" [37927340-66ab-4951-bd4b-59b0e0d01812] Running
	I1028 17:10:37.079422  110282 system_pods.go:61] "nvidia-device-plugin-daemonset-z7q9t" [29592f17-9aa8-4d19-b8d1-dcb2278980ef] Running
	I1028 17:10:37.079428  110282 system_pods.go:61] "registry-66c9cd494c-67lgb" [9af05f14-ce81-44bb-97d1-37dedf7c187c] Running
	I1028 17:10:37.079434  110282 system_pods.go:61] "registry-proxy-nbdps" [cd42d863-c294-464d-b7cd-95396c429181] Running
	I1028 17:10:37.079440  110282 system_pods.go:61] "snapshot-controller-56fcc65765-cdh9r" [865ae345-a0e7-417d-9c96-5544c2832d7e] Running
	I1028 17:10:37.079447  110282 system_pods.go:61] "snapshot-controller-56fcc65765-vwwxr" [dc9caf29-d5cc-4123-96ee-d69a2da2e706] Running
	I1028 17:10:37.079453  110282 system_pods.go:61] "storage-provisioner" [a0b9c49a-8d86-4f02-84fb-10f963133047] Running
	I1028 17:10:37.079460  110282 system_pods.go:74] duration metric: took 10.887580531s to wait for pod list to return data ...
	I1028 17:10:37.079477  110282 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:10:37.082069  110282 default_sa.go:45] found service account: "default"
	I1028 17:10:37.082097  110282 default_sa.go:55] duration metric: took 2.612904ms for default service account to be created ...
	I1028 17:10:37.082115  110282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:10:37.090408  110282 system_pods.go:86] 19 kube-system pods found
	I1028 17:10:37.090450  110282 system_pods.go:89] "amd-gpu-device-plugin-jhlpw" [f711d106-eb63-4b6b-8661-25cd70f4f3b1] Running
	I1028 17:10:37.090459  110282 system_pods.go:89] "coredns-7c65d6cfc9-mc8s8" [9f5e6a87-7e82-49cc-bea9-1975bf9e65dd] Running
	I1028 17:10:37.090465  110282 system_pods.go:89] "csi-hostpath-attacher-0" [da3d6207-32ff-44d2-b0ee-df10b36350ac] Running
	I1028 17:10:37.090470  110282 system_pods.go:89] "csi-hostpath-resizer-0" [7e68a1f2-b307-486c-ac3b-4c103de4e95c] Running
	I1028 17:10:37.090476  110282 system_pods.go:89] "csi-hostpathplugin-728fs" [d161e22d-e638-413a-aa21-c02a59e7f793] Running
	I1028 17:10:37.090481  110282 system_pods.go:89] "etcd-addons-803184" [a95b6003-b239-4852-b463-0fff9cd0f206] Running
	I1028 17:10:37.090486  110282 system_pods.go:89] "kindnet-hj2qh" [32e72145-ef94-4e95-b3f8-99108d471a86] Running
	I1028 17:10:37.090493  110282 system_pods.go:89] "kube-apiserver-addons-803184" [39798403-bdbb-47d9-89c1-768e79344f2b] Running
	I1028 17:10:37.090499  110282 system_pods.go:89] "kube-controller-manager-addons-803184" [11d701f0-9111-4089-8635-652492fc24a3] Running
	I1028 17:10:37.090507  110282 system_pods.go:89] "kube-ingress-dns-minikube" [079c2ef4-da73-455a-90bb-fe1a00f5ef5d] Running
	I1028 17:10:37.090512  110282 system_pods.go:89] "kube-proxy-rlsxn" [c8571a1a-da60-4e3a-80b6-4739a4f2b0d7] Running
	I1028 17:10:37.090519  110282 system_pods.go:89] "kube-scheduler-addons-803184" [8b63e9e0-0128-44b5-8ca5-f90c9ea46b5e] Running
	I1028 17:10:37.090533  110282 system_pods.go:89] "metrics-server-84c5f94fbc-674zg" [37927340-66ab-4951-bd4b-59b0e0d01812] Running
	I1028 17:10:37.090544  110282 system_pods.go:89] "nvidia-device-plugin-daemonset-z7q9t" [29592f17-9aa8-4d19-b8d1-dcb2278980ef] Running
	I1028 17:10:37.090554  110282 system_pods.go:89] "registry-66c9cd494c-67lgb" [9af05f14-ce81-44bb-97d1-37dedf7c187c] Running
	I1028 17:10:37.090561  110282 system_pods.go:89] "registry-proxy-nbdps" [cd42d863-c294-464d-b7cd-95396c429181] Running
	I1028 17:10:37.090568  110282 system_pods.go:89] "snapshot-controller-56fcc65765-cdh9r" [865ae345-a0e7-417d-9c96-5544c2832d7e] Running
	I1028 17:10:37.090575  110282 system_pods.go:89] "snapshot-controller-56fcc65765-vwwxr" [dc9caf29-d5cc-4123-96ee-d69a2da2e706] Running
	I1028 17:10:37.090583  110282 system_pods.go:89] "storage-provisioner" [a0b9c49a-8d86-4f02-84fb-10f963133047] Running
	I1028 17:10:37.090595  110282 system_pods.go:126] duration metric: took 8.471887ms to wait for k8s-apps to be running ...
	I1028 17:10:37.090608  110282 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:10:37.090676  110282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:10:37.102389  110282 system_svc.go:56] duration metric: took 11.771068ms WaitForService to wait for kubelet
	I1028 17:10:37.102427  110282 kubeadm.go:582] duration metric: took 2m27.067901172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:10:37.102457  110282 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:10:37.105299  110282 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1028 17:10:37.105336  110282 node_conditions.go:123] node cpu capacity is 8
	I1028 17:10:37.105351  110282 node_conditions.go:105] duration metric: took 2.888292ms to run NodePressure ...
	I1028 17:10:37.105364  110282 start.go:241] waiting for startup goroutines ...
	I1028 17:10:37.105371  110282 start.go:246] waiting for cluster config update ...
	I1028 17:10:37.105388  110282 start.go:255] writing updated cluster config ...
	I1028 17:10:37.105681  110282 ssh_runner.go:195] Run: rm -f paused
	I1028 17:10:37.154349  110282 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:10:37.156332  110282 out.go:177] * Done! kubectl is now configured to use "addons-803184" cluster and "default" namespace by default
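
Note: "minor skew: 0" is minikube's kubectl/cluster version check; kubectl is supported within one minor version of the API server, and a 1.31.x client against a 1.31.x cluster reports zero skew. A minimal sketch of that comparison (the minorSkew helper is hypothetical, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew is a hypothetical helper: it returns the absolute difference
    // between the minor components of two "major.minor.patch" versions, the
    // quantity reported as "minor skew" in the log line above.
    func minorSkew(client, server string) int {
        minor := func(v string) int {
            m, _ := strconv.Atoi(strings.Split(v, ".")[1])
            return m
        }
        d := minor(client) - minor(server)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Println(minorSkew("1.31.2", "1.31.2")) // 0, as in the log above
    }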
	
	
	==> CRI-O <==
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.617925220Z" level=info msg="Removed pod sandbox: 957e45e09c67c0e35c7d0d63cdc2a7d2ff9ace91648670e9463ba0ce331edd3d" id=10cdde84-f594-4d66-a859-afe6e61fe12f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.618380221Z" level=info msg="Stopping pod sandbox: 6f3939936f7765dfa32b52282697c15fdd66490483fd5a00684f385e8c72a606" id=55ee144a-40fd-407f-989f-2ffbe4fdaa2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.618407746Z" level=info msg="Stopped pod sandbox (already stopped): 6f3939936f7765dfa32b52282697c15fdd66490483fd5a00684f385e8c72a606" id=55ee144a-40fd-407f-989f-2ffbe4fdaa2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.618660297Z" level=info msg="Removing pod sandbox: 6f3939936f7765dfa32b52282697c15fdd66490483fd5a00684f385e8c72a606" id=34208126-0204-412b-a6d3-566daf14cb3a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.624614770Z" level=info msg="Removed pod sandbox: 6f3939936f7765dfa32b52282697c15fdd66490483fd5a00684f385e8c72a606" id=34208126-0204-412b-a6d3-566daf14cb3a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.625141678Z" level=info msg="Stopping pod sandbox: 7ae508787aa3481550f9e0cc78e3bbdaff8655730c095d8c10fc526b04e3dc9f" id=f4ac848d-1688-44ca-b7ad-5501614a1b66 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.625170191Z" level=info msg="Stopped pod sandbox (already stopped): 7ae508787aa3481550f9e0cc78e3bbdaff8655730c095d8c10fc526b04e3dc9f" id=f4ac848d-1688-44ca-b7ad-5501614a1b66 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.625439274Z" level=info msg="Removing pod sandbox: 7ae508787aa3481550f9e0cc78e3bbdaff8655730c095d8c10fc526b04e3dc9f" id=cea6a7b8-6127-4278-a543-60834b0829f0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.631402214Z" level=info msg="Removed pod sandbox: 7ae508787aa3481550f9e0cc78e3bbdaff8655730c095d8c10fc526b04e3dc9f" id=cea6a7b8-6127-4278-a543-60834b0829f0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.631756230Z" level=info msg="Stopping pod sandbox: b8958e8f5281a44ffabe1ac55220b41ec32796af4807184aecbd5663d209dd96" id=affda649-6862-44f7-9df8-c40e60805acd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.631781916Z" level=info msg="Stopped pod sandbox (already stopped): b8958e8f5281a44ffabe1ac55220b41ec32796af4807184aecbd5663d209dd96" id=affda649-6862-44f7-9df8-c40e60805acd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.632082543Z" level=info msg="Removing pod sandbox: b8958e8f5281a44ffabe1ac55220b41ec32796af4807184aecbd5663d209dd96" id=0029cb15-76b1-4077-b6a8-66296feea548 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:12:04 addons-803184 crio[1033]: time="2024-10-28 17:12:04.637709364Z" level=info msg="Removed pod sandbox: b8958e8f5281a44ffabe1ac55220b41ec32796af4807184aecbd5663d209dd96" id=0029cb15-76b1-4077-b6a8-66296feea548 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.198879885Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-hr2bl/POD" id=71507ff8-ffb8-4060-997c-589d02df9e8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.198962503Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.239538323Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-hr2bl Namespace:default ID:c98e86b1568193202c6b56dcb59a83376e94e77aa4f0dc51a95e00358aed3bd6 UID:dca9ecaf-fe7e-4b15-9068-edc992541e69 NetNS:/var/run/netns/ad702e70-c36e-4eb5-a9c3-5de1281d9c14 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.239577298Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-hr2bl to CNI network \"kindnet\" (type=ptp)"
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.249189401Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-hr2bl Namespace:default ID:c98e86b1568193202c6b56dcb59a83376e94e77aa4f0dc51a95e00358aed3bd6 UID:dca9ecaf-fe7e-4b15-9068-edc992541e69 NetNS:/var/run/netns/ad702e70-c36e-4eb5-a9c3-5de1281d9c14 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.249374109Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-hr2bl for CNI network kindnet (type=ptp)"
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.251982118Z" level=info msg="Ran pod sandbox c98e86b1568193202c6b56dcb59a83376e94e77aa4f0dc51a95e00358aed3bd6 with infra container: default/hello-world-app-55bf9c44b4-hr2bl/POD" id=71507ff8-ffb8-4060-997c-589d02df9e8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.253162017Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d89481fd-614e-483e-8d53-f235e3f489ad name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.253362569Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d89481fd-614e-483e-8d53-f235e3f489ad name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.253907399Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0742912b-4023-4264-8fdf-10ba0ec34a1f name=/runtime.v1.ImageService/PullImage
	Oct 28 17:13:50 addons-803184 crio[1033]: time="2024-10-28 17:13:50.263316647Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 28 17:13:51 addons-803184 crio[1033]: time="2024-10-28 17:13:51.295725881Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
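
Note: the ImageStatus, "not found", PullImage sequence above is the standard CRI flow for an image that is not yet on the node. Below is a minimal sketch of the same two calls using the k8s.io/cri-api bindings, assuming CRI-O's default socket path; this is an illustration only (it would need root on the node), not part of the test suite:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's conventional socket path (assumption for this sketch).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewImageServiceClient(conn)
        spec := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

        // "Checking image status": a nil Image in the response means not present.
        status, err := client.ImageStatus(context.Background(),
            &runtimeapi.ImageStatusRequest{Image: spec})
        if err != nil {
            log.Fatal(err)
        }
        if status.Image == nil {
            // "Pulling image": blocks until the pull succeeds or fails.
            resp, err := client.PullImage(context.Background(),
                &runtimeapi.PullImageRequest{Image: spec})
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println("pulled:", resp.ImageRef)
        }
    }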
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5fe124ad0d846       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   a5bf6a2891ba6       nginx
	47487a2d71c3f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   d93d97479c706       busybox
	ec97e84f5438f       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   5f487d7817107       ingress-nginx-controller-5f85ff4588-qfhpf
	df81cd12b25ac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   76c2e84ba02fe       metrics-server-84c5f94fbc-674zg
	48009f6110960       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   35c23dc4d7ee0       local-path-provisioner-86d989889c-xsbnr
	236dccdf82ebc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   5 minutes ago       Exited              patch                     0                   ed5b57735d1d6       ingress-nginx-admission-patch-prp8k
	eb99d0d6042bf       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   fe9f57301d9e5       kube-ingress-dns-minikube
	4edb24fd79a1c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   5 minutes ago       Exited              create                    0                   d5db8fd49eee7       ingress-nginx-admission-create-xqrnj
	812751f5e2a24       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   7497107d3bf1e       storage-provisioner
	e00de5529feb3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   8ed6236d4db5d       coredns-7c65d6cfc9-mc8s8
	6b73547e89dce       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                                             5 minutes ago       Running             kindnet-cni               0                   f05cc2dabc46b       kindnet-hj2qh
	623595caf3621       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   0e87349546e7b       kube-proxy-rlsxn
	3ae549dfb8f03       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   376b526902bbf       kube-apiserver-addons-803184
	e146d6b67329b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   042db5d4b535d       kube-controller-manager-addons-803184
	435c4410be526       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   00173b9182265       kube-scheduler-addons-803184
	73de1b918a7a5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   cac6a432d93bb       etcd-addons-803184
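
Note: in the table above, the two Exited entries (create and patch) are the ingress-nginx admission-webhook certificate jobs, which run to completion by design, and the bare-hex IMAGE values are local image IDs rather than tags. Every long-running container is in Running state, including the nginx pod exercised by the failed Ingress test.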
	
	
	==> coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] <==
	[INFO] 10.244.0.18:34610 - 11755 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064177s
	[INFO] 10.244.0.18:43599 - 50204 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005857594s
	[INFO] 10.244.0.18:43599 - 50432 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.007239333s
	[INFO] 10.244.0.18:45817 - 41780 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007004746s
	[INFO] 10.244.0.18:45817 - 42039 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007053558s
	[INFO] 10.244.0.18:33593 - 10116 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00587392s
	[INFO] 10.244.0.18:33593 - 10344 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006662611s
	[INFO] 10.244.0.18:57312 - 55423 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009205s
	[INFO] 10.244.0.18:57312 - 55136 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146158s
	[INFO] 10.244.0.22:39175 - 30683 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181174s
	[INFO] 10.244.0.22:57941 - 17921 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000241168s
	[INFO] 10.244.0.22:39755 - 28695 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016942s
	[INFO] 10.244.0.22:53390 - 17121 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000254421s
	[INFO] 10.244.0.22:33757 - 52435 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009425s
	[INFO] 10.244.0.22:37986 - 46939 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167377s
	[INFO] 10.244.0.22:50270 - 1505 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.010778355s
	[INFO] 10.244.0.22:37050 - 55336 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.010926821s
	[INFO] 10.244.0.22:47906 - 24137 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.01020189s
	[INFO] 10.244.0.22:43196 - 38204 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.010250558s
	[INFO] 10.244.0.22:54173 - 43823 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008688936s
	[INFO] 10.244.0.22:38605 - 63551 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008816568s
	[INFO] 10.244.0.22:60385 - 7058 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000834358s
	[INFO] 10.244.0.22:50769 - 55991 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000954172s
	[INFO] 10.244.0.25:57521 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000259007s
	[INFO] 10.244.0.25:48577 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145593s
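
Note: the NXDOMAIN bursts above are resolver search-list expansion at work. With the usual in-cluster ndots:5 resolv.conf, a lookup such as registry.kube-system.svc.cluster.local is tried against each search suffix first (the europe-west2-a.c.k8s-minikube.internal, c.k8s-minikube.internal, and google.internal entries are inherited from the GCE host), and only the final absolute query returns NOERROR. A small sketch of that expansion, with the search list inferred from the suffixes visible in the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // expand mimics glibc-style search-list handling: a name with fewer
    // than ndots dots is tried with each search suffix before being
    // tried as an absolute name.
    func expand(name string, ndots int, search []string) []string {
        var queries []string
        if strings.Count(name, ".") < ndots {
            for _, s := range search {
                queries = append(queries, name+"."+s)
            }
        }
        return append(queries, name)
    }

    func main() {
        // Suffixes inferred from the NXDOMAIN queries above (assumption:
        // the pod's full search list may contain more entries than shown).
        search := []string{
            "cluster.local",
            "europe-west2-a.c.k8s-minikube.internal",
            "c.k8s-minikube.internal",
            "google.internal",
        }
        for _, q := range expand("registry.kube-system.svc.cluster.local", 5, search) {
            fmt.Println(q)
        }
    }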
	
	
	==> describe nodes <==
	Name:               addons-803184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-803184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=addons-803184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_08_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-803184
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:08:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-803184
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:13:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:11:39 +0000   Mon, 28 Oct 2024 17:08:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:11:39 +0000   Mon, 28 Oct 2024 17:08:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:11:39 +0000   Mon, 28 Oct 2024 17:08:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:11:39 +0000   Mon, 28 Oct 2024 17:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-803184
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2b47af00984e389c68af9dc7a29c31
	  System UUID:                bf2f7dbd-aeea-4147-ba5b-eea51abda43d
	  Boot ID:                    9ca5ee1d-76d3-40f6-894f-a30303f688cc
	  Kernel Version:             5.15.0-1070-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     hello-world-app-55bf9c44b4-hr2bl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-qfhpf    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m36s
	  kube-system                 coredns-7c65d6cfc9-mc8s8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m41s
	  kube-system                 etcd-addons-803184                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m47s
	  kube-system                 kindnet-hj2qh                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m41s
	  kube-system                 kube-apiserver-addons-803184                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-addons-803184        200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-proxy-rlsxn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-addons-803184                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 metrics-server-84c5f94fbc-674zg              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  local-path-storage          local-path-provisioner-86d989889c-xsbnr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m37s                  kube-proxy       
	  Normal   Starting                 5m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m52s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m52s (x8 over 5m52s)  kubelet          Node addons-803184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m52s (x8 over 5m52s)  kubelet          Node addons-803184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m52s (x7 over 5m52s)  kubelet          Node addons-803184 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m47s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m47s                  kubelet          Node addons-803184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m47s                  kubelet          Node addons-803184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m47s                  kubelet          Node addons-803184 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m42s                  node-controller  Node addons-803184 event: Registered Node addons-803184 in Controller
	  Normal   NodeReady                5m28s                  kubelet          Node addons-803184 status is now: NodeReady
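
Note: the request percentages in the Allocated resources table are fractions of the node's allocatable capacity: 1050m CPU of 8000m is 13.1%, shown as 13%, and 510Mi (522240Ki) of 32859312Ki memory is about 1.6%, which truncates to the 1% shown.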
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 50 ac a9 60 41 08 06
	[Oct28 16:57] IPv4: martian source 10.244.0.1 from 10.244.0.47, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 8c fc bf 5e 5d 08 06
	[Oct28 16:58] IPv4: martian source 10.244.0.1 from 10.244.0.48, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 ca 3f 1e c5 5a 08 06
	[ +23.638784] IPv4: martian source 10.244.0.1 from 10.244.0.49, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e8 fb 71 c4 cc 08 06
	[Oct28 16:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 e9 c4 bd 3e 0d 08 06
	[ +22.900129] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 49 91 d3 37 da 08 06
	[Oct28 17:11] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +1.015600] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +2.015817] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +4.127681] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +8.195365] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[Oct28 17:12] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[ +32.253574] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
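
Note: a "martian source" is a packet whose source address is impossible on the receiving interface, here 127.0.0.1 arriving on eth0; the kernel logs these when the net.ipv4.conf.*.log_martians sysctl is enabled. The 17:11-17:12 entries plausibly correspond to the localhost traffic generated by the failed ingress curl attempts, and they are noise rather than an error in themselves.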
	
	
	==> etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] <==
	{"level":"warn","ts":"2024-10-28T17:08:12.733268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.994814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-10-28T17:08:12.733319Z","caller":"traceutil/trace.go:171","msg":"trace[1954612271] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:409; }","duration":"195.058487ms","start":"2024-10-28T17:08:12.538250Z","end":"2024-10-28T17:08:12.733308Z","steps":["trace[1954612271] 'agreement among raft nodes before linearized reading'  (duration: 194.958626ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.740649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.294141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-28T17:08:12.835507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.286876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-rlsxn\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-10-28T17:08:12.838767Z","caller":"traceutil/trace.go:171","msg":"trace[1719793517] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-rlsxn; range_end:; response_count:1; response_revision:415; }","duration":"104.556054ms","start":"2024-10-28T17:08:12.734191Z","end":"2024-10-28T17:08:12.838748Z","steps":["trace[1719793517] 'agreement among raft nodes before linearized reading'  (duration: 101.209093ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.836884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.439315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-803184\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-10-28T17:08:12.839174Z","caller":"traceutil/trace.go:171","msg":"trace[640734739] range","detail":"{range_begin:/registry/minions/addons-803184; range_end:; response_count:1; response_revision:415; }","duration":"104.731509ms","start":"2024-10-28T17:08:12.734427Z","end":"2024-10-28T17:08:12.839158Z","steps":["trace[640734739] 'agreement among raft nodes before linearized reading'  (duration: 102.414378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.837072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.687466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:08:12.839547Z","caller":"traceutil/trace.go:171","msg":"trace[779157548] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:415; }","duration":"105.158019ms","start":"2024-10-28T17:08:12.734377Z","end":"2024-10-28T17:08:12.839535Z","steps":["trace[779157548] 'agreement among raft nodes before linearized reading'  (duration: 102.67492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.837112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.903098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:08:12.839931Z","caller":"traceutil/trace.go:171","msg":"trace[2084882310] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:415; }","duration":"105.718598ms","start":"2024-10-28T17:08:12.734201Z","end":"2024-10-28T17:08:12.839920Z","steps":["trace[2084882310] 'agreement among raft nodes before linearized reading'  (duration: 102.890249ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:12.842441Z","caller":"traceutil/trace.go:171","msg":"trace[734434833] range","detail":"{range_begin:/registry/daemonsets/kube-system/amd-gpu-device-plugin; range_end:; response_count:0; response_revision:414; }","duration":"291.759872ms","start":"2024-10-28T17:08:12.538333Z","end":"2024-10-28T17:08:12.830093Z","steps":["trace[734434833] 'agreement among raft nodes before linearized reading'  (duration: 202.275165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.842512Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:08:12.538313Z","time spent":"304.176827ms","remote":"127.0.0.1:44962","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":29,"request content":"key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" "}
	{"level":"info","ts":"2024-10-28T17:08:13.245277Z","caller":"traceutil/trace.go:171","msg":"trace[1499117157] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"101.046866ms","start":"2024-10-28T17:08:13.144211Z","end":"2024-10-28T17:08:13.245258Z","steps":["trace[1499117157] 'process raft request'  (duration: 97.479313ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.547721Z","caller":"traceutil/trace.go:171","msg":"trace[1692226229] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"115.57696ms","start":"2024-10-28T17:08:13.432124Z","end":"2024-10-28T17:08:13.547701Z","steps":["trace[1692226229] 'process raft request'  (duration: 113.493556ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.548851Z","caller":"traceutil/trace.go:171","msg":"trace[1209363146] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:460; }","duration":"102.101129ms","start":"2024-10-28T17:08:13.446735Z","end":"2024-10-28T17:08:13.548836Z","steps":["trace[1209363146] 'read index received'  (duration: 98.894041ms)","trace[1209363146] 'applied index is now lower than readState.Index'  (duration: 3.206309ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T17:08:13.549109Z","caller":"traceutil/trace.go:171","msg":"trace[13686491] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"114.15528ms","start":"2024-10-28T17:08:13.434942Z","end":"2024-10-28T17:08:13.549098Z","steps":["trace[13686491] 'process raft request'  (duration: 113.680799ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.549333Z","caller":"traceutil/trace.go:171","msg":"trace[1084945865] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"104.646427ms","start":"2024-10-28T17:08:13.444668Z","end":"2024-10-28T17:08:13.549314Z","steps":["trace[1084945865] 'process raft request'  (duration: 104.018849ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.549502Z","caller":"traceutil/trace.go:171","msg":"trace[924302981] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"102.950687ms","start":"2024-10-28T17:08:13.446544Z","end":"2024-10-28T17:08:13.549494Z","steps":["trace[924302981] 'process raft request'  (duration: 102.22335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:13.549749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.001409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-28T17:08:13.549809Z","caller":"traceutil/trace.go:171","msg":"trace[236314811] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:452; }","duration":"103.069302ms","start":"2024-10-28T17:08:13.446730Z","end":"2024-10-28T17:08:13.549799Z","steps":["trace[236314811] 'agreement among raft nodes before linearized reading'  (duration: 102.982414ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:09:22.996485Z","caller":"traceutil/trace.go:171","msg":"trace[1224421849] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"113.666334ms","start":"2024-10-28T17:09:22.882799Z","end":"2024-10-28T17:09:22.996466Z","steps":["trace[1224421849] 'process raft request'  (duration: 113.569703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:23.184274Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.888879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-prslp\" ","response":"range_response_count:1 size:3937"}
	{"level":"info","ts":"2024-10-28T17:09:23.184355Z","caller":"traceutil/trace.go:171","msg":"trace[1664509026] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-prslp; range_end:; response_count:1; response_revision:1148; }","duration":"129.978957ms","start":"2024-10-28T17:09:23.054360Z","end":"2024-10-28T17:09:23.184339Z","steps":["trace[1664509026] 'range keys from in-memory index tree'  (duration: 129.767424ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:09:33.948521Z","caller":"traceutil/trace.go:171","msg":"trace[1215977858] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"156.152471ms","start":"2024-10-28T17:09:33.792344Z","end":"2024-10-28T17:09:33.948496Z","steps":["trace[1215977858] 'process raft request'  (duration: 98.894578ms)","trace[1215977858] 'compare'  (duration: 56.728244ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:13:51 up 56 min,  0 users,  load average: 0.56, 1.05, 1.02
	Linux addons-803184 5.15.0-1070-gcp #78~20.04.1-Ubuntu SMP Wed Oct 9 22:05:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] <==
	I1028 17:11:43.333165       1 main.go:300] handling current node
	I1028 17:11:53.330988       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:11:53.331051       1 main.go:300] handling current node
	I1028 17:12:03.335870       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:12:03.335905       1 main.go:300] handling current node
	I1028 17:12:13.330984       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:12:13.331019       1 main.go:300] handling current node
	I1028 17:12:23.333661       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:12:23.333704       1 main.go:300] handling current node
	I1028 17:12:33.337844       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:12:33.337891       1 main.go:300] handling current node
	I1028 17:12:43.339957       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:12:43.339994       1 main.go:300] handling current node
	I1028 17:12:53.331902       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:12:53.331940       1 main.go:300] handling current node
	I1028 17:13:03.340033       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:13:03.340067       1 main.go:300] handling current node
	I1028 17:13:13.331074       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:13:13.331121       1 main.go:300] handling current node
	I1028 17:13:23.331887       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:13:23.331920       1 main.go:300] handling current node
	I1028 17:13:33.339996       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:13:33.340031       1 main.go:300] handling current node
	I1028 17:13:43.335908       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:13:43.335941       1 main.go:300] handling current node
	
	
	==> kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] <==
	 > logger="UnhandledError"
	E1028 17:10:04.249071       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.218.115:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.218.115:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.218.115:443: connect: connection refused" logger="UnhandledError"
	I1028 17:10:04.280638       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 17:10:47.843425       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52348: use of closed network connection
	E1028 17:10:48.008478       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52380: use of closed network connection
	I1028 17:10:56.981174       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.243.234"}
	I1028 17:11:27.391726       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 17:11:28.409463       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 17:11:29.660195       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 17:11:30.051527       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 17:11:30.251109       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.97.116"}
	I1028 17:11:53.210874       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.211032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.225176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.225226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.227091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.227133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.239219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.239269       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.253710       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.253849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 17:11:54.228238       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 17:11:54.253957       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 17:11:54.360339       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 17:13:50.042053       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.240.197"}
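
Note: the v1beta1.metrics.k8s.io aggregation error (connection refused to 10.97.218.115:443) is consistent with the TestAddons/parallel/MetricsServer failure recorded for this run, and the "Terminating all watchers" lines mark the gadget (17:11:28) and snapshot (17:11:54) CRDs being deleted, which explains the watch errors in the kube-controller-manager log below.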
	
	
	==> kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] <==
	E1028 17:12:12.536686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:14.766700       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:14.766741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:26.743310       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:26.743353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:28.346237       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:28.346275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:37.701281       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:37.701344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:44.882102       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:44.882146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:00.441632       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:00.441673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:03.483171       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:03.483210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:19.092134       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:19.092178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:32.427576       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:32.427621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:39.848807       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:39.848849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1028 17:13:49.897378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.76633ms"
	I1028 17:13:49.901379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.952504ms"
	I1028 17:13:49.901461       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.641µs"
	I1028 17:13:49.905532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.168µs"
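
Note: the repeated PartialObjectMetadata list/watch failures almost certainly come from metadata informers (used by the garbage-collector and quota controllers) still polling resource types whose CRDs were just deleted; see the "Terminating all watchers" lines in the kube-apiserver log above. They persist until the next discovery resync and are expected here, not a failure of this run.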
	
	
	==> kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] <==
	I1028 17:08:13.236451       1 server_linux.go:66] "Using iptables proxy"
	I1028 17:08:14.144186       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 17:08:14.144285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:08:14.441347       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 17:08:14.441498       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:08:14.444289       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:08:14.444876       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:08:14.445171       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:08:14.446715       1 config.go:199] "Starting service config controller"
	I1028 17:08:14.446793       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:08:14.446852       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:08:14.448135       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:08:14.446891       1 config.go:328] "Starting node config controller"
	I1028 17:08:14.448241       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:08:14.547558       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:08:14.549043       1 shared_informer.go:320] Caches are synced for node config
	I1028 17:08:14.549193       1 shared_informer.go:320] Caches are synced for endpoint slice config
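
Note: the "Waiting for caches to sync" / "Caches are synced" pairs above are client-go's shared-informer startup handshake: a component lists and watches its resources and only starts reconciling once the initial list is cached. A minimal sketch of the same pattern with client-go (assumes a reachable kubeconfig in $KUBECONFIG; illustration only):

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        pods := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // kicks off the list+watch goroutines

        // Equivalent of "Waiting for caches to sync": block until the
        // initial list has landed in the local cache.
        if !cache.WaitForCacheSync(stop, pods.HasSynced) {
            log.Fatal("cache never synced")
        }
        fmt.Println("caches are synced") // mirrors the log lines above
    }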
	
	
	==> kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] <==
	W1028 17:08:02.144871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 17:08:02.144889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:02.144934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:08:02.144956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:02.950479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 17:08:02.950523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:02.965016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:08:02.965052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.015547       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 17:08:03.015590       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 17:08:03.015635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 17:08:03.015667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.022111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:08:03.022157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.076513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 17:08:03.076562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.076513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 17:08:03.076605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.079760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:08:03.079811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.151940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 17:08:03.151982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.179325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 17:08:03.179370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1028 17:08:04.741621       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 17:13:44 addons-803184 kubelet[1630]: E1028 17:13:44.601743    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135624601494464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599399,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:13:44 addons-803184 kubelet[1630]: E1028 17:13:44.601785    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135624601494464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599399,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896455    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="hostpath"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896502    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da3d6207-32ff-44d2-b0ee-df10b36350ac" containerName="csi-attacher"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896512    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="node-driver-registrar"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896521    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="liveness-probe"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896532    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7ceff62-7918-4d9e-bb94-6e5ee2b2777b" containerName="task-pv-container"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896544    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e68a1f2-b307-486c-ac3b-4c103de4e95c" containerName="csi-resizer"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896555    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc9caf29-d5cc-4123-96ee-d69a2da2e706" containerName="volume-snapshot-controller"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896564    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="865ae345-a0e7-417d-9c96-5544c2832d7e" containerName="volume-snapshot-controller"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896572    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="csi-external-health-monitor-controller"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896584    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="csi-provisioner"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: E1028 17:13:49.896592    1630 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="csi-snapshotter"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896640    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9caf29-d5cc-4123-96ee-d69a2da2e706" containerName="volume-snapshot-controller"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896649    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="liveness-probe"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896656    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="csi-provisioner"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896663    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ceff62-7918-4d9e-bb94-6e5ee2b2777b" containerName="task-pv-container"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896671    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="865ae345-a0e7-417d-9c96-5544c2832d7e" containerName="volume-snapshot-controller"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896680    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="csi-snapshotter"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896687    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e68a1f2-b307-486c-ac3b-4c103de4e95c" containerName="csi-resizer"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896694    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="node-driver-registrar"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896702    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="csi-external-health-monitor-controller"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896710    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="da3d6207-32ff-44d2-b0ee-df10b36350ac" containerName="csi-attacher"
	Oct 28 17:13:49 addons-803184 kubelet[1630]: I1028 17:13:49.896717    1630 memory_manager.go:354] "RemoveStaleState removing state" podUID="d161e22d-e638-413a-aa21-c02a59e7f793" containerName="hostpath"
	Oct 28 17:13:50 addons-803184 kubelet[1630]: I1028 17:13:50.069837    1630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zb7\" (UniqueName: \"kubernetes.io/projected/dca9ecaf-fe7e-4b15-9068-edc992541e69-kube-api-access-29zb7\") pod \"hello-world-app-55bf9c44b4-hr2bl\" (UID: \"dca9ecaf-fe7e-4b15-9068-edc992541e69\") " pod="default/hello-world-app-55bf9c44b4-hr2bl"
	
	
	==> storage-provisioner [812751f5e2a247ec37efb705c3eae0e2c65a9209dce8df8470218cf396718428] <==
	I1028 17:08:24.837029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 17:08:24.846747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 17:08:24.846822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 17:08:24.856451       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 17:08:24.856505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"98a4a8f4-6ecd-4758-8640-ac8d02da712d", APIVersion:"v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-803184_e9eb44be-1eca-44fd-a052-5a56aabaeb8b became leader
	I1028 17:08:24.856647       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-803184_e9eb44be-1eca-44fd-a052-5a56aabaeb8b!
	I1028 17:08:24.957775       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-803184_e9eb44be-1eca-44fd-a052-5a56aabaeb8b!
	

                                                
                                                
-- /stdout --
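Note on the log tail above: the kube-scheduler list/watch "forbidden" errors at 17:08:02-03 are typical bootstrap noise; the scheduler's informers start before its RBAC grants are visible, and the errors stop once "Caches are synced" appears at 17:08:04. A quick sketch for confirming the permissions did converge on this cluster (context name taken from this run):

	kubectl --context addons-803184 auth can-i list persistentvolumeclaims --as=system:kube-scheduler
	kubectl --context addons-803184 auth can-i watch nodes --as=system:kube-scheduler

The kubelet's repeating "failed to get HasDedicatedImageFs: missing image stats" errors come from the image-filesystem query against cri-o; the raw CRI view can be cross-checked from inside the node, assuming crictl is present there (it ships in the kicbase image):

	out/minikube-linux-amd64 -p addons-803184 ssh -- sudo crictl imagefsinfo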
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-803184 -n addons-803184
helpers_test.go:261: (dbg) Run:  kubectl --context addons-803184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-hr2bl ingress-nginx-admission-create-xqrnj ingress-nginx-admission-patch-prp8k
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-803184 describe pod hello-world-app-55bf9c44b4-hr2bl ingress-nginx-admission-create-xqrnj ingress-nginx-admission-patch-prp8k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-803184 describe pod hello-world-app-55bf9c44b4-hr2bl ingress-nginx-admission-create-xqrnj ingress-nginx-admission-patch-prp8k: exit status 1 (68.258833ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-hr2bl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-803184/192.168.49.2
	Start Time:       Mon, 28 Oct 2024 17:13:49 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29zb7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-29zb7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-hr2bl to addons-803184
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xqrnj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-prp8k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-803184 describe pod hello-world-app-55bf9c44b4-hr2bl ingress-nginx-admission-create-xqrnj ingress-nginx-admission-patch-prp8k: exit status 1
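The two NotFound errors above are expected rather than a second failure: ingress-nginx-admission-create and ingress-nginx-admission-patch are one-shot Job pods, so they match the status.phase!=Running field selector while Succeeded and can be garbage-collected before the follow-up describe runs. A selector that also excludes completed pods avoids this noise, for example:

	kubectl --context addons-803184 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded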
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 addons disable ingress-dns --alsologtostderr -v=1: (1.346369111s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 addons disable ingress --alsologtostderr -v=1: (7.614543582s)
--- FAIL: TestAddons/parallel/Ingress (151.58s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (314.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.17242ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-674zg" [37927340-66ab-4951-bd4b-59b0e0d01812] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003427408s
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (67.414782ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 3m15.590468458s

                                                
                                                
** /stderr **
I1028 17:11:25.592837  108914 retry.go:31] will retry after 4.183127633s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (78.287379ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 3m19.852104548s

                                                
                                                
** /stderr **
I1028 17:11:29.854651  108914 retry.go:31] will retry after 3.905578697s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (67.57705ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 3m23.826491926s

                                                
                                                
** /stderr **
I1028 17:11:33.829065  108914 retry.go:31] will retry after 5.723186404s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (67.053446ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 3m29.618398167s

                                                
                                                
** /stderr **
I1028 17:11:39.620622  108914 retry.go:31] will retry after 14.773277644s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (78.236017ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 3m44.469156594s

                                                
                                                
** /stderr **
I1028 17:11:54.472532  108914 retry.go:31] will retry after 13.804861132s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (64.083537ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 3m58.339923157s

                                                
                                                
** /stderr **
I1028 17:12:08.342341  108914 retry.go:31] will retry after 22.837733068s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (63.925333ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 4m21.244022769s

                                                
                                                
** /stderr **
I1028 17:12:31.246584  108914 retry.go:31] will retry after 40.258787529s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (65.302261ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 5m1.570750436s

                                                
                                                
** /stderr **
I1028 17:13:11.573410  108914 retry.go:31] will retry after 41.623262028s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (64.308705ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 5m43.259779415s

                                                
                                                
** /stderr **
I1028 17:13:53.262316  108914 retry.go:31] will retry after 1m26.228416875s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (63.054631ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 7m9.552540601s

                                                
                                                
** /stderr **
I1028 17:15:19.554634  108914 retry.go:31] will retry after 1m11.736521407s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-803184 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-803184 top pods -n kube-system: exit status 1 (67.577968ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mc8s8, age: 8m21.358610181s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
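The metrics-server pod was Running when the retries began, so the failure sits on the metrics path rather than scheduling: kubectl top is served by the aggregated metrics.k8s.io API, and "Metrics not available" for a pod this old means no sample was ever reported for it. A hedged first pass at triage for this cluster:

	kubectl --context addons-803184 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-803184 -n kube-system logs deploy/metrics-server --tail=50
	kubectl --context addons-803184 get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 400

If the APIService shows Available=False or the raw query errors, the problem is API registration or the kubelet scrape (commonly TLS verification against the kubelet), not the workloads being measured.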
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-803184
helpers_test.go:235: (dbg) docker inspect addons-803184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7",
	        "Created": "2024-10-28T17:07:48.694578747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 111014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T17:07:48.828704388Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b614a1ff29c6e85b537175184edffd528c6bd99b5b0eb51bb6059bd4ad5ba0a2",
	        "ResolvConfPath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/hosts",
	        "LogPath": "/var/lib/docker/containers/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7/8beae7471f18f3b528980ef294fe11c32142d2c34b446f3c61cf7e2c40d4f6a7-json.log",
	        "Name": "/addons-803184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-803184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-803184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da-init/diff:/var/lib/docker/overlay2/6f44dcb837d0e69b1b3a1c42f8a8e838d4ec916efe93e3f6d6a8c0411f4e43e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/642222271cbb99da9a64969a254fb19d9ae6e0fee6b1b57d6ac603c6339654da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-803184",
	                "Source": "/var/lib/docker/volumes/addons-803184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-803184",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-803184",
	                "name.minikube.sigs.k8s.io": "addons-803184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd0a18dbf335a437e9015f60020c8a0e160ebabba8b9ad55a900b4d1378f85ee",
	            "SandboxKey": "/var/run/docker/netns/fd0a18dbf335",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-803184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c41606722e6f3e1ef41cf3f5ba84835c6a256c1b4bab5daeeca0436af7c726e2",
	                    "EndpointID": "b0c15c4462e901eec2425a62b5c711f7c90e55b4a5a1af61771147dd7062d9c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-803184",
	                        "8beae7471f18"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
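The inspect dump above mostly confirms a healthy node container: running since 17:07:48, 4000 MiB of memory and 2 CPUs (the Memory/NanoCpus fields), and the expected 192.168.49.2 address on the addons-803184 network. When only those fields matter, a format template keeps the check to one line, for example:

	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-803184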
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-803184 -n addons-803184
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 logs -n 25: (1.126574353s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-179742                                                                   | download-docker-179742 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-988801   | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | binary-mirror-988801                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35689                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-988801                                                                     | binary-mirror-988801   | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| addons  | disable dashboard -p                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-803184                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-803184                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-803184 --wait=true                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | -p addons-803184                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-803184 ip                                                                            | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-803184 ssh cat                                                                       | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | /opt/local-path-provisioner/pvc-6dbabf11-4f7e-4e00-b596-30d9d2fb3ea8_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-803184 ssh curl -s                                                                   | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-803184 addons                                                                        | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-803184 ip                                                                            | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:13 UTC |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:13 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-803184 addons disable                                                                | addons-803184          | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
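	For readability, the wrapped start invocation in the Audit table reassembles into this single command line (binary name assumed to be the out/minikube-linux-amd64 under test, as used elsewhere in this report):
	
	  out/minikube-linux-amd64 start -p addons-803184 --wait=true --memory=4000 --alsologtostderr \
	    --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	    --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
	    --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio \
	    --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher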
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:07:24
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:07:24.785481  110282 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:07:24.785605  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:24.785614  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:07:24.785618  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:24.785783  110282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:07:24.786455  110282 out.go:352] Setting JSON to false
	I1028 17:07:24.787343  110282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2986,"bootTime":1730132259,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:07:24.787452  110282 start.go:139] virtualization: kvm guest
	I1028 17:07:24.819760  110282 out.go:177] * [addons-803184] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:07:24.901159  110282 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:07:24.901163  110282 notify.go:220] Checking for updates...
	I1028 17:07:25.037563  110282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:07:25.122388  110282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:07:25.206023  110282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:07:25.277619  110282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:07:25.359594  110282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:07:25.423887  110282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:07:25.444617  110282 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:07:25.444732  110282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:07:25.491224  110282 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-28 17:07:25.481654457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:07:25.491321  110282 docker.go:318] overlay module found
	I1028 17:07:25.611590  110282 out.go:177] * Using the docker driver based on user configuration
	I1028 17:07:25.683521  110282 start.go:297] selected driver: docker
	I1028 17:07:25.683550  110282 start.go:901] validating driver "docker" against <nil>
	I1028 17:07:25.683565  110282 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:07:25.684391  110282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:07:25.731052  110282 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-28 17:07:25.722023507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:07:25.731232  110282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:07:25.731495  110282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:07:25.766395  110282 out.go:177] * Using Docker driver with root privileges
	I1028 17:07:25.809308  110282 cni.go:84] Creating CNI manager for ""
	I1028 17:07:25.809393  110282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:07:25.809405  110282 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
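The two cni.go lines above record the decision minikube logs here: the "docker" driver paired with the "crio" runtime ships no usable default pod network, so kindnet is recommended and NetworkPlugin is forced to cni. A minimal Go sketch of that rule, assuming a hypothetical chooseCNI helper (not minikube's actual implementation):

package main

import "fmt"

// chooseCNI illustrates the rule the log reports: the docker (kic) driver
// combined with a non-docker runtime needs an explicit CNI, and kindnet is
// the default recommendation for docker+crio. Hypothetical helper.
func chooseCNI(driver, containerRuntime string) string {
	if driver == "docker" && containerRuntime == "crio" {
		return "kindnet"
	}
	return "" // other driver/runtime pairs are resolved by other branches
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // prints: kindnet
}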
	I1028 17:07:25.809499  110282 start.go:340] cluster config:
	{Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:25.851871  110282 out.go:177] * Starting "addons-803184" primary control-plane node in "addons-803184" cluster
	I1028 17:07:25.934036  110282 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 17:07:26.015885  110282 out.go:177] * Pulling base image v0.0.45-1730110049-19872 ...
	I1028 17:07:26.141305  110282 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon
	I1028 17:07:26.141315  110282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:26.141428  110282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:07:26.141443  110282 cache.go:56] Caching tarball of preloaded images
	I1028 17:07:26.141551  110282 preload.go:172] Found /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:07:26.141564  110282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:07:26.141902  110282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/config.json ...
	I1028 17:07:26.141930  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/config.json: {Name:mka4295eb11d0690c289fe7ea69051b27a134fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:26.157670  110282 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 to local cache
	I1028 17:07:26.157812  110282 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local cache directory
	I1028 17:07:26.157837  110282 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local cache directory, skipping pull
	I1028 17:07:26.157846  110282 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 exists in cache, skipping pull
	I1028 17:07:26.157862  110282 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 as a tarball
	I1028 17:07:26.157870  110282 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 from local cache
	I1028 17:07:38.623063  110282 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 from cached tarball
	I1028 17:07:38.623106  110282 cache.go:194] Successfully downloaded all kic artifacts
	I1028 17:07:38.623156  110282 start.go:360] acquireMachinesLock for addons-803184: {Name:mkc61bd3c490082ef7b102a5ec0ecfb79ea6ac1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:07:38.623271  110282 start.go:364] duration metric: took 88.743µs to acquireMachinesLock for "addons-803184"
	I1028 17:07:38.623302  110282 start.go:93] Provisioning new machine with config: &{Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:07:38.623372  110282 start.go:125] createHost starting for "" (driver="docker")
	I1028 17:07:38.625399  110282 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1028 17:07:38.625637  110282 start.go:159] libmachine.API.Create for "addons-803184" (driver="docker")
	I1028 17:07:38.625673  110282 client.go:168] LocalClient.Create starting
	I1028 17:07:38.625788  110282 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem
	I1028 17:07:38.795006  110282 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem
	I1028 17:07:38.957380  110282 cli_runner.go:164] Run: docker network inspect addons-803184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1028 17:07:38.973849  110282 cli_runner.go:211] docker network inspect addons-803184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1028 17:07:38.973927  110282 network_create.go:284] running [docker network inspect addons-803184] to gather additional debugging logs...
	I1028 17:07:38.973950  110282 cli_runner.go:164] Run: docker network inspect addons-803184
	W1028 17:07:38.989982  110282 cli_runner.go:211] docker network inspect addons-803184 returned with exit code 1
	I1028 17:07:38.990025  110282 network_create.go:287] error running [docker network inspect addons-803184]: docker network inspect addons-803184: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-803184 not found
	I1028 17:07:38.990040  110282 network_create.go:289] output of [docker network inspect addons-803184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-803184 not found
	
	** /stderr **
	I1028 17:07:38.990196  110282 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 17:07:39.006877  110282 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021ed1f0}
	I1028 17:07:39.006930  110282 network_create.go:124] attempt to create docker network addons-803184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1028 17:07:39.006994  110282 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-803184 addons-803184
	I1028 17:07:39.072386  110282 network_create.go:108] docker network addons-803184 192.168.49.0/24 created
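The two steps above — network.go probing for a free private subnet, then network_create.go issuing docker network create with a fixed subnet, gateway and MTU — can be reproduced outside minikube. A sketch using os/exec with the exact flags from the cli_runner line (illustrative only; minikube drives docker through its cli_runner package rather than calling exec directly):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Recreate the user-defined bridge network exactly as logged for
	// addons-803184: fixed subnet/gateway, masquerading, inter-container
	// connectivity, MTU 1500, and minikube's bookkeeping labels.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=addons-803184",
		"addons-803184")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}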
	I1028 17:07:39.072419  110282 kic.go:121] calculated static IP "192.168.49.2" for the "addons-803184" container
	I1028 17:07:39.072492  110282 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1028 17:07:39.087433  110282 cli_runner.go:164] Run: docker volume create addons-803184 --label name.minikube.sigs.k8s.io=addons-803184 --label created_by.minikube.sigs.k8s.io=true
	I1028 17:07:39.105020  110282 oci.go:103] Successfully created a docker volume addons-803184
	I1028 17:07:39.105148  110282 cli_runner.go:164] Run: docker run --rm --name addons-803184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-803184 --entrypoint /usr/bin/test -v addons-803184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -d /var/lib
	I1028 17:07:44.089434  110282 cli_runner.go:217] Completed: docker run --rm --name addons-803184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-803184 --entrypoint /usr/bin/test -v addons-803184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -d /var/lib: (4.984236069s)
	I1028 17:07:44.089467  110282 oci.go:107] Successfully prepared a docker volume addons-803184
	I1028 17:07:44.089483  110282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:44.089511  110282 kic.go:194] Starting extracting preloaded images to volume ...
	I1028 17:07:44.089571  110282 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-803184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -I lz4 -xf /preloaded.tar -C /extractDir
	I1028 17:07:48.635465  110282 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-803184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.545846919s)
	I1028 17:07:48.635502  110282 kic.go:203] duration metric: took 4.545989612s to extract preloaded images to volume ...
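The extraction above uses a throwaway container as a tar runner: the lz4 preload tarball is bind-mounted read-only and unpacked straight into the named volume, so the node container later starts with /var already populated and no images to pull. A hedged Go sketch of that invocation, with the tarball path and image digest copied from the log lines above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9"
	// --rm: the container exists only to run tar; the unpacked data
	// survives in the addons-803184 volume mounted at /extractDir.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "addons-803184:/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatal(err)
	}
}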
	W1028 17:07:48.635640  110282 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1028 17:07:48.635733  110282 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1028 17:07:48.679892  110282 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-803184 --name addons-803184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-803184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-803184 --network addons-803184 --ip 192.168.49.2 --volume addons-803184:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9
	I1028 17:07:49.004091  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Running}}
	I1028 17:07:49.022667  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:07:49.041064  110282 cli_runner.go:164] Run: docker exec addons-803184 stat /var/lib/dpkg/alternatives/iptables
	I1028 17:07:49.082982  110282 oci.go:144] the created container "addons-803184" has a running status.
	I1028 17:07:49.083025  110282 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa...
	I1028 17:07:49.174857  110282 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1028 17:07:49.195421  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:07:49.213492  110282 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1028 17:07:49.213514  110282 kic_runner.go:114] Args: [docker exec --privileged addons-803184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1028 17:07:49.259054  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:07:49.278611  110282 machine.go:93] provisionDockerMachine start ...
	I1028 17:07:49.278704  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:49.296905  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:49.297123  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:49.297142  110282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 17:07:49.297938  110282 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55408->127.0.0.1:32768: read: connection reset by peer
	I1028 17:07:52.415421  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-803184
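The handshake failure at 17:07:49 followed by the clean hostname result here is the expected pattern: sshd inside the just-created container is not yet accepting connections on the first dial, so the provisioner retries until it is. A sketch of such a retry loop with golang.org/x/crypto/ssh, using the address, user and key path reported by the sshutil.go lines below (illustrative, not libmachine's implementation):

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		Timeout:         5 * time.Second,
	}
	// Retry the dial: the first attempt typically fails with a reset
	// while sshd is still coming up, as the log above shows.
	var client *ssh.Client
	for attempt := 0; attempt < 10; attempt++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // addons-803184
}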
	
	I1028 17:07:52.415453  110282 ubuntu.go:169] provisioning hostname "addons-803184"
	I1028 17:07:52.415543  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:52.432679  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:52.432889  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:52.432906  110282 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-803184 && echo "addons-803184" | sudo tee /etc/hostname
	I1028 17:07:52.559459  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-803184
	
	I1028 17:07:52.559540  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:52.576689  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:52.576871  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:52.576887  110282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-803184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-803184/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-803184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:07:52.692092  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:07:52.692122  110282 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19872-102136/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-102136/.minikube}
	I1028 17:07:52.692140  110282 ubuntu.go:177] setting up certificates
	I1028 17:07:52.692151  110282 provision.go:84] configureAuth start
	I1028 17:07:52.692213  110282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-803184
	I1028 17:07:52.708582  110282 provision.go:143] copyHostCerts
	I1028 17:07:52.708673  110282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-102136/.minikube/ca.pem (1078 bytes)
	I1028 17:07:52.708786  110282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-102136/.minikube/cert.pem (1123 bytes)
	I1028 17:07:52.708845  110282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-102136/.minikube/key.pem (1679 bytes)
	I1028 17:07:52.708893  110282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-102136/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca-key.pem org=jenkins.addons-803184 san=[127.0.0.1 192.168.49.2 addons-803184 localhost minikube]
	I1028 17:07:52.894995  110282 provision.go:177] copyRemoteCerts
	I1028 17:07:52.895079  110282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:07:52.895122  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:52.911979  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:52.996900  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 17:07:53.020610  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:07:53.043655  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:07:53.065491  110282 provision.go:87] duration metric: took 373.32663ms to configureAuth
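provision.go:117 above mints a server certificate signed by the minikube CA with the SAN entries [127.0.0.1 192.168.49.2 addons-803184 localhost minikube]. A self-contained crypto/x509 sketch of building such a SAN certificate (self-signed here for brevity, whereas minikube signs with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-803184"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go:117 line above.
		DNSNames:    []string{"addons-803184", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed for the sketch; pass a CA certificate and CA key as the
	// third and fifth arguments to sign with a real CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}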
	I1028 17:07:53.065520  110282 ubuntu.go:193] setting minikube options for container-runtime
	I1028 17:07:53.065734  110282 config.go:182] Loaded profile config "addons-803184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:53.065851  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.081895  110282 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:53.082077  110282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 17:07:53.082093  110282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:07:53.284337  110282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:07:53.284372  110282 machine.go:96] duration metric: took 4.005737045s to provisionDockerMachine
	I1028 17:07:53.284387  110282 client.go:171] duration metric: took 14.658702752s to LocalClient.Create
	I1028 17:07:53.284415  110282 start.go:167] duration metric: took 14.65877684s to libmachine.API.Create "addons-803184"
	I1028 17:07:53.284428  110282 start.go:293] postStartSetup for "addons-803184" (driver="docker")
	I1028 17:07:53.284444  110282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:07:53.284521  110282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:07:53.284579  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.302157  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.392857  110282 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:07:53.396157  110282 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 17:07:53.396203  110282 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 17:07:53.396215  110282 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 17:07:53.396226  110282 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 17:07:53.396240  110282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-102136/.minikube/addons for local assets ...
	I1028 17:07:53.396321  110282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-102136/.minikube/files for local assets ...
	I1028 17:07:53.396367  110282 start.go:296] duration metric: took 111.930763ms for postStartSetup
	I1028 17:07:53.396744  110282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-803184
	I1028 17:07:53.413279  110282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/config.json ...
	I1028 17:07:53.413578  110282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:07:53.413625  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.430597  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.512577  110282 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 17:07:53.516815  110282 start.go:128] duration metric: took 14.893424884s to createHost
	I1028 17:07:53.516850  110282 start.go:83] releasing machines lock for "addons-803184", held for 14.893563934s
	I1028 17:07:53.516919  110282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-803184
	I1028 17:07:53.533187  110282 ssh_runner.go:195] Run: cat /version.json
	I1028 17:07:53.533248  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.533263  110282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:07:53.533331  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:07:53.550481  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.551174  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:07:53.631904  110282 ssh_runner.go:195] Run: systemctl --version
	I1028 17:07:53.714102  110282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:07:53.858156  110282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 17:07:53.862436  110282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:07:53.880070  110282 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1028 17:07:53.880150  110282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:07:53.905937  110282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
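The stat/find/mv sequence above sidelines every preinstalled loopback, bridge and podman CNI config by renaming it with a .mk_disabled suffix, so that only the kindnet config minikube deploys later is active. An equivalent sketch in Go (assumes the same /etc/cni/net.d layout and root privileges):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and configs that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			fmt.Println("disabled", src)
		}
	}
}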
	I1028 17:07:53.905965  110282 start.go:495] detecting cgroup driver to use...
	I1028 17:07:53.906045  110282 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 17:07:53.906114  110282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:07:53.920610  110282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:07:53.931127  110282 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:07:53.931179  110282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:07:53.943658  110282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:07:53.956594  110282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:07:54.033812  110282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:07:54.111427  110282 docker.go:233] disabling docker service ...
	I1028 17:07:54.111497  110282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:07:54.129011  110282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:07:54.139816  110282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:07:54.213475  110282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:07:54.293971  110282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:07:54.304976  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:07:54.319807  110282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:07:54.319875  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.329521  110282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:07:54.329582  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.338870  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.348014  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.357714  110282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:07:54.366109  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.375082  110282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.389468  110282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:54.398204  110282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:07:54.405580  110282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:07:54.412963  110282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:54.488723  110282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:07:54.590396  110282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:07:54.590468  110282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:07:54.593988  110282 start.go:563] Will wait 60s for crictl version
	I1028 17:07:54.594045  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:07:54.597236  110282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:07:54.629387  110282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1028 17:07:54.629498  110282 ssh_runner.go:195] Run: crio --version
	I1028 17:07:54.663925  110282 ssh_runner.go:195] Run: crio --version
	I1028 17:07:54.700469  110282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1028 17:07:54.701791  110282 cli_runner.go:164] Run: docker network inspect addons-803184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 17:07:54.718227  110282 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1028 17:07:54.721820  110282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:07:54.732369  110282 kubeadm.go:883] updating cluster {Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:07:54.732495  110282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:54.732544  110282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:54.798937  110282 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:07:54.798959  110282 crio.go:433] Images already preloaded, skipping extraction
	I1028 17:07:54.799006  110282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:54.830801  110282 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:07:54.830827  110282 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:07:54.830835  110282 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1028 17:07:54.830923  110282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-803184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:07:54.830982  110282 ssh_runner.go:195] Run: crio config
	I1028 17:07:54.872355  110282 cni.go:84] Creating CNI manager for ""
	I1028 17:07:54.872379  110282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:07:54.872389  110282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:07:54.872411  110282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-803184 NodeName:addons-803184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:07:54.872526  110282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-803184"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
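The rendered kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---). Before it is copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step below, each document can be sanity-checked with gopkg.in/yaml.v3's multi-document decoder; a sketch (illustrative only, since minikube renders this from a template rather than re-parsing it):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // e.g. a local copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // all documents consumed
			}
			log.Fatal(err) // a malformed document fails fast here
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}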
	
	I1028 17:07:54.872583  110282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:07:54.880919  110282 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:07:54.880982  110282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 17:07:54.889159  110282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1028 17:07:54.905317  110282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:07:54.921905  110282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1028 17:07:54.938561  110282 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1028 17:07:54.942050  110282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:07:54.951988  110282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:55.025295  110282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:07:55.037847  110282 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184 for IP: 192.168.49.2
	I1028 17:07:55.037871  110282 certs.go:194] generating shared ca certs ...
	I1028 17:07:55.037887  110282 certs.go:226] acquiring lock for ca certs: {Name:mke618d91ba42d60684aa6c76238fe0c56bd6c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.038022  110282 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key
	I1028 17:07:55.221170  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt ...
	I1028 17:07:55.221205  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt: {Name:mkea952d3a2fb13dbfe6a1ba11e87b0120210fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.221375  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key ...
	I1028 17:07:55.221387  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key: {Name:mk67b003eb4a44232118a840252d439a691101af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.221456  110282 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key
	I1028 17:07:55.265245  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.crt ...
	I1028 17:07:55.265279  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.crt: {Name:mka71c04b2d5424d029443f6c74127f148cf7288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.265450  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key ...
	I1028 17:07:55.265461  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key: {Name:mka77769e743f683a6ab4fdb3dd21af12021995d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.265529  110282 certs.go:256] generating profile certs ...
	I1028 17:07:55.265585  110282 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.key
	I1028 17:07:55.265599  110282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt with IP's: []
	I1028 17:07:55.357955  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt ...
	I1028 17:07:55.357995  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: {Name:mk8ef6723fb0846854c8585d89a3380fb5acecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.358233  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.key ...
	I1028 17:07:55.358253  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.key: {Name:mka17a90b9e3666830a753c51adf6b99a61b7470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.358351  110282 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b
	I1028 17:07:55.358376  110282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1028 17:07:55.455942  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b ...
	I1028 17:07:55.455980  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b: {Name:mk236bb2e10dd05e3e31388872425979dd48f603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.456171  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b ...
	I1028 17:07:55.456188  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b: {Name:mk830c79ae7973f1db99722984141bac65df621c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.456286  110282 certs.go:381] copying /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt.40e5cc3b -> /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt
	I1028 17:07:55.456386  110282 certs.go:385] copying /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key.40e5cc3b -> /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key
	I1028 17:07:55.456459  110282 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key
	I1028 17:07:55.456484  110282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt with IP's: []
	I1028 17:07:55.521505  110282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt ...
	I1028 17:07:55.521543  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt: {Name:mk4f1190c5c8590c533d5b1dd4dc3bb25b064e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.521745  110282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key ...
	I1028 17:07:55.521764  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key: {Name:mkc24ee91e50b508a48f41bf10116699139cf180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:55.521969  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:07:55.522020  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/ca.pem (1078 bytes)
	I1028 17:07:55.522064  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:07:55.522100  110282 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-102136/.minikube/certs/key.pem (1679 bytes)
	I1028 17:07:55.522737  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:07:55.545757  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:07:55.567446  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:07:55.590481  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:07:55.612500  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 17:07:55.633864  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:07:55.656440  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:07:55.678426  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:07:55.700702  110282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:07:55.722438  110282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:07:55.738471  110282 ssh_runner.go:195] Run: openssl version
	I1028 17:07:55.743666  110282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:07:55.752326  110282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:55.755476  110282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:55.755533  110282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:55.761785  110282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:07:55.770959  110282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:07:55.774082  110282 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:07:55.774151  110282 kubeadm.go:392] StartCluster: {Name:addons-803184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-803184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:55.774240  110282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:07:55.774302  110282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:07:55.806505  110282 cri.go:89] found id: ""
	I1028 17:07:55.806565  110282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:07:55.814904  110282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:07:55.823510  110282 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1028 17:07:55.823566  110282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:07:55.832283  110282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:07:55.832304  110282 kubeadm.go:157] found existing configuration files:
	
	I1028 17:07:55.832356  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:07:55.840876  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:07:55.840950  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:07:55.848915  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:07:55.856851  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:07:55.856913  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:07:55.864561  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:07:55.872513  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:07:55.872584  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:07:55.880329  110282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:07:55.888998  110282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:07:55.889059  110282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 17:07:55.896652  110282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1028 17:07:55.931107  110282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:07:55.931529  110282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:07:55.947254  110282 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1028 17:07:55.947328  110282 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-gcp
	I1028 17:07:55.947375  110282 kubeadm.go:310] OS: Linux
	I1028 17:07:55.947428  110282 kubeadm.go:310] CGROUPS_CPU: enabled
	I1028 17:07:55.947534  110282 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1028 17:07:55.947623  110282 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1028 17:07:55.947712  110282 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1028 17:07:55.947824  110282 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1028 17:07:55.947937  110282 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1028 17:07:55.948003  110282 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1028 17:07:55.948068  110282 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1028 17:07:55.948136  110282 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1028 17:07:55.996704  110282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:07:55.996894  110282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:07:55.997059  110282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:07:56.002862  110282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:07:56.006220  110282 out.go:235]   - Generating certificates and keys ...
	I1028 17:07:56.006349  110282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:07:56.006426  110282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:07:56.323169  110282 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:07:56.676683  110282 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:07:56.807956  110282 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:07:56.922858  110282 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:07:57.203303  110282 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:07:57.203420  110282 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-803184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 17:07:57.303190  110282 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:07:57.303328  110282 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-803184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 17:07:57.383570  110282 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:07:57.471883  110282 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:07:57.563114  110282 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:07:57.563208  110282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:07:57.714733  110282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:07:57.908204  110282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:07:58.340533  110282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:07:58.395621  110282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:07:58.628328  110282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:07:58.628791  110282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:07:58.631339  110282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:07:58.633252  110282 out.go:235]   - Booting up control plane ...
	I1028 17:07:58.633386  110282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:07:58.633515  110282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:07:58.634054  110282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:07:58.642867  110282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:07:58.648133  110282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:07:58.648233  110282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:07:58.727624  110282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:07:58.727900  110282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:07:59.229173  110282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.66073ms
	I1028 17:07:59.229301  110282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:08:03.730581  110282 kubeadm.go:310] [api-check] The API server is healthy after 4.501368226s
	I1028 17:08:03.742206  110282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:08:03.754514  110282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:08:03.772913  110282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:08:03.773130  110282 kubeadm.go:310] [mark-control-plane] Marking the node addons-803184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:08:03.782672  110282 kubeadm.go:310] [bootstrap-token] Using token: mi4vsm.4k04m6igvyo5znl6
	I1028 17:08:03.785272  110282 out.go:235]   - Configuring RBAC rules ...
	I1028 17:08:03.785438  110282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:08:03.790737  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:08:03.798163  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:08:03.803283  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:08:03.806136  110282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:08:03.808875  110282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:08:04.136044  110282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:08:04.561039  110282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:08:05.138770  110282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:08:05.139869  110282 kubeadm.go:310] 
	I1028 17:08:05.139960  110282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:08:05.139978  110282 kubeadm.go:310] 
	I1028 17:08:05.140067  110282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:08:05.140076  110282 kubeadm.go:310] 
	I1028 17:08:05.140107  110282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:08:05.140198  110282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:08:05.140265  110282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:08:05.140274  110282 kubeadm.go:310] 
	I1028 17:08:05.140346  110282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:08:05.140384  110282 kubeadm.go:310] 
	I1028 17:08:05.140441  110282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:08:05.140456  110282 kubeadm.go:310] 
	I1028 17:08:05.140520  110282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:08:05.140616  110282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:08:05.140671  110282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:08:05.140678  110282 kubeadm.go:310] 
	I1028 17:08:05.140746  110282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:08:05.140854  110282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:08:05.140869  110282 kubeadm.go:310] 
	I1028 17:08:05.140998  110282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mi4vsm.4k04m6igvyo5znl6 \
	I1028 17:08:05.141140  110282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0dd8f5c133ceac1a3915b25678ee9c11eaa82810533cc630f757b22eb21d5ee3 \
	I1028 17:08:05.141170  110282 kubeadm.go:310] 	--control-plane 
	I1028 17:08:05.141179  110282 kubeadm.go:310] 
	I1028 17:08:05.141281  110282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:08:05.141291  110282 kubeadm.go:310] 
	I1028 17:08:05.141386  110282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mi4vsm.4k04m6igvyo5znl6 \
	I1028 17:08:05.141515  110282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0dd8f5c133ceac1a3915b25678ee9c11eaa82810533cc630f757b22eb21d5ee3 
	I1028 17:08:05.143812  110282 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-gcp\n", err: exit status 1
	I1028 17:08:05.143940  110282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
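
Note: the two [WARNING ...] lines above are why minikube passes --ignore-preflight-errors at 17:07:55.896652: inside the docker-driver node container the "configs" kernel module cannot be loaded, so kubeadm's SystemVerification preflight check is expected to fail and is skipped rather than fixed. A minimal sketch of the same pattern (config path taken from this run; the shortened ignore list here is illustrative, not minikube's exact one):

	# skip preflight checks that cannot pass inside a container
	sudo kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem
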
	I1028 17:08:05.143961  110282 cni.go:84] Creating CNI manager for ""
	I1028 17:08:05.143970  110282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:08:05.146476  110282 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 17:08:05.147940  110282 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 17:08:05.151584  110282 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 17:08:05.151604  110282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 17:08:05.168663  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
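
Note: with the docker driver and the crio runtime, minikube selects kindnet as the CNI (17:08:05.143970) and applies its manifest with the cluster's own kubectl binary, as shown above. To confirm what landed, the conventional CNI paths inside the node container can be inspected directly (container name from this run; paths are the standard CNI locations, an assumption for this sketch):

	# list CNI configs and plugin binaries inside the minikube node
	docker exec addons-803184 ls /etc/cni/net.d
	docker exec addons-803184 ls /opt/cni/bin   # includes the portmap plugin stat'ed above
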
	I1028 17:08:05.362839  110282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:08:05.362931  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:05.362967  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-803184 minikube.k8s.io/updated_at=2024_10_28T17_08_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=addons-803184 minikube.k8s.io/primary=true
	I1028 17:08:05.370342  110282 ops.go:34] apiserver oom_adj: -16
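
Note: "apiserver oom_adj: -16" above is read straight out of procfs and confirms the kubelet gave kube-apiserver a negative OOM score, so the kernel's OOM killer prefers to reap other processes first under memory pressure. The check is reproducible by hand with the same command the log runs:

	# read the OOM adjustment of the running apiserver (-16 in this run)
	cat /proc/$(pgrep kube-apiserver)/oom_adj
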
	I1028 17:08:05.459280  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:05.960312  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:06.459536  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:06.960298  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:07.459941  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:07.960246  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:08.459817  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:08.959816  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:09.459535  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:09.960088  110282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:08:10.033473  110282 kubeadm.go:1113] duration metric: took 4.670616888s to wait for elevateKubeSystemPrivileges
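
Note: the burst of identical "kubectl get sa default" runs above is a ~500 ms poll loop: the default ServiceAccount is created asynchronously by the controller manager, and minikube waits for it before reporting kube-system privileges elevated (4.67 s here). The equivalent shell idiom, as a sketch reusing the binaries and kubeconfig from this run:

	# poll until the default ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
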
	I1028 17:08:10.033515  110282 kubeadm.go:394] duration metric: took 14.259369739s to StartCluster
	I1028 17:08:10.033539  110282 settings.go:142] acquiring lock: {Name:mk5660b45458ca6389d875a5473d75a5cb1d1df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:08:10.033662  110282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:08:10.034180  110282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-102136/kubeconfig: {Name:mk9c3758014b9f711e0c502c4f4a5172f5e22b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:08:10.034483  110282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:08:10.034700  110282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1028 17:08:10.034830  110282 addons.go:69] Setting yakd=true in profile "addons-803184"
	I1028 17:08:10.034854  110282 addons.go:234] Setting addon yakd=true in "addons-803184"
	I1028 17:08:10.034884  110282 config.go:182] Loaded profile config "addons-803184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:08:10.034902  110282 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-803184"
	I1028 17:08:10.034914  110282 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-803184"
	I1028 17:08:10.034889  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.034937  110282 addons.go:69] Setting cloud-spanner=true in profile "addons-803184"
	I1028 17:08:10.034949  110282 addons.go:234] Setting addon cloud-spanner=true in "addons-803184"
	I1028 17:08:10.034963  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.035519  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.034771  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:08:10.034930  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.035613  110282 addons.go:69] Setting metrics-server=true in profile "addons-803184"
	I1028 17:08:10.035638  110282 addons.go:234] Setting addon metrics-server=true in "addons-803184"
	I1028 17:08:10.035678  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.035749  110282 addons.go:69] Setting storage-provisioner=true in profile "addons-803184"
	I1028 17:08:10.035777  110282 addons.go:234] Setting addon storage-provisioner=true in "addons-803184"
	I1028 17:08:10.035770  110282 addons.go:69] Setting gcp-auth=true in profile "addons-803184"
	I1028 17:08:10.035835  110282 mustload.go:65] Loading cluster: addons-803184
	I1028 17:08:10.035847  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.036027  110282 config.go:182] Loaded profile config "addons-803184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:08:10.036127  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036146  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036243  110282 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-803184"
	I1028 17:08:10.036290  110282 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-803184"
	I1028 17:08:10.036311  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036317  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.036327  110282 addons.go:69] Setting default-storageclass=true in profile "addons-803184"
	I1028 17:08:10.036345  110282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-803184"
	I1028 17:08:10.036602  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036774  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036982  110282 addons.go:69] Setting volcano=true in profile "addons-803184"
	I1028 17:08:10.037007  110282 addons.go:234] Setting addon volcano=true in "addons-803184"
	I1028 17:08:10.037039  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.037510  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.037705  110282 addons.go:69] Setting volumesnapshots=true in profile "addons-803184"
	I1028 17:08:10.037725  110282 addons.go:234] Setting addon volumesnapshots=true in "addons-803184"
	I1028 17:08:10.037765  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.038223  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.039075  110282 addons.go:69] Setting inspektor-gadget=true in profile "addons-803184"
	I1028 17:08:10.039096  110282 addons.go:234] Setting addon inspektor-gadget=true in "addons-803184"
	I1028 17:08:10.039130  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.039626  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.036312  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.041997  110282 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-803184"
	I1028 17:08:10.042028  110282 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-803184"
	I1028 17:08:10.042154  110282 addons.go:69] Setting ingress=true in profile "addons-803184"
	I1028 17:08:10.042216  110282 addons.go:234] Setting addon ingress=true in "addons-803184"
	I1028 17:08:10.042296  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.042436  110282 addons.go:69] Setting ingress-dns=true in profile "addons-803184"
	I1028 17:08:10.042517  110282 addons.go:234] Setting addon ingress-dns=true in "addons-803184"
	I1028 17:08:10.042591  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.042637  110282 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-803184"
	I1028 17:08:10.042668  110282 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-803184"
	I1028 17:08:10.042727  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.043329  110282 addons.go:69] Setting registry=true in profile "addons-803184"
	I1028 17:08:10.043383  110282 addons.go:234] Setting addon registry=true in "addons-803184"
	I1028 17:08:10.043425  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.043507  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.047670  110282 out.go:177] * Verifying Kubernetes components...
	I1028 17:08:10.035522  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.050323  110282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:08:10.068259  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.068260  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.068644  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.069239  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.081727  110282 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 17:08:10.083030  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 17:08:10.083067  110282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 17:08:10.083146  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.088652  110282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:08:10.090159  110282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:08:10.090184  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:08:10.090249  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.107297  110282 addons.go:234] Setting addon default-storageclass=true in "addons-803184"
	I1028 17:08:10.107351  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.107741  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	W1028 17:08:10.108019  110282 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
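
Note: the volcano failure above is a compatibility guard, not a crash: the addon is rejected on the crio runtime and startup continues with the remaining addons. Addon state for the profile can be checked afterwards with the standard CLI:

	# show which addons are enabled/disabled for this profile
	minikube -p addons-803184 addons list
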
	I1028 17:08:10.109133  110282 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 17:08:10.109198  110282 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 17:08:10.109645  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 17:08:10.110377  110282 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 17:08:10.121181  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 17:08:10.121253  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.122995  110282 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 17:08:10.123118  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 17:08:10.123376  110282 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:08:10.123520  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.124551  110282 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:08:10.124569  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 17:08:10.124619  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.123566  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 17:08:10.125004  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.126005  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 17:08:10.127256  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 17:08:10.127325  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 17:08:10.132040  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 17:08:10.132072  110282 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 17:08:10.132145  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.132302  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 17:08:10.132465  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:10.133626  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.134267  110282 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 17:08:10.138924  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:10.139065  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 17:08:10.139081  110282 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 17:08:10.139149  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.139370  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 17:08:10.139422  110282 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 17:08:10.141042  110282 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:08:10.141074  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 17:08:10.141128  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.141349  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 17:08:10.141461  110282 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 17:08:10.141475  110282 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 17:08:10.141538  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.146115  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 17:08:10.147423  110282 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 17:08:10.148608  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 17:08:10.148630  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 17:08:10.148750  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.149852  110282 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 17:08:10.150861  110282 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 17:08:10.152107  110282 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:08:10.152125  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 17:08:10.152168  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.152285  110282 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 17:08:10.153499  110282 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 17:08:10.153523  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 17:08:10.153582  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.162368  110282 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-803184"
	I1028 17:08:10.162419  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:10.162858  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:10.163983  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.169440  110282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:08:10.169462  110282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:08:10.169508  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.186995  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.198927  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.204289  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.204281  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.207311  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.208995  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.214507  110282 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 17:08:10.214723  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.217114  110282 out.go:177]   - Using image docker.io/busybox:stable
	I1028 17:08:10.217730  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.217763  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.218470  110282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:08:10.218490  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 17:08:10.218547  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:10.218812  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.220243  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.252010  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:10.438240  110282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:08:10.438394  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:08:10.447826  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:08:10.538337  110282 node_ready.go:35] waiting up to 6m0s for node "addons-803184" to be "Ready" ...
	I1028 17:08:10.638532  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 17:08:10.646721  110282 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 17:08:10.646805  110282 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 17:08:10.651033  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:08:10.729834  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:08:10.729960  110282 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 17:08:10.729981  110282 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 17:08:10.732574  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 17:08:10.732601  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 17:08:10.739340  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:08:10.744874  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 17:08:10.744902  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 17:08:10.749961  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:08:10.755927  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:08:10.829282  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 17:08:10.829382  110282 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 17:08:10.829853  110282 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:08:10.829926  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 17:08:10.846589  110282 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:08:10.846621  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 17:08:10.849286  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:08:10.947723  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:08:10.949137  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 17:08:10.949223  110282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 17:08:11.028742  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 17:08:11.028841  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 17:08:11.145157  110282 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 17:08:11.145251  110282 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 17:08:11.229707  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 17:08:11.229801  110282 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 17:08:11.234100  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:08:11.247204  110282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:08:11.247310  110282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 17:08:11.529576  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:08:11.541795  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 17:08:11.541824  110282 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 17:08:11.629260  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 17:08:11.629360  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 17:08:11.732222  110282 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 17:08:11.732324  110282 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 17:08:11.939740  110282 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.501301793s)
	I1028 17:08:11.939943  110282 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
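
Note: the sed pipeline that just completed (started at 17:08:10.438394) rewrites the coredns ConfigMap in place: it splices a hosts block resolving host.minikube.internal to the docker network gateway 192.168.49.1 ahead of the forward directive, and inserts the log plugin before errors. The resulting Corefile fragment, reconstructed from those sed expressions (stock plugins between them elided):

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
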
	I1028 17:08:12.134520  110282 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:08:12.134615  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 17:08:12.335863  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 17:08:12.335965  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 17:08:12.340811  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:08:12.346103  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 17:08:12.346134  110282 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 17:08:12.735694  110282 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-803184" context rescaled to 1 replicas
	I1028 17:08:12.841169  110282 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 17:08:12.841201  110282 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 17:08:12.842123  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:13.032044  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 17:08:13.032140  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 17:08:13.141442  110282 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:13.141534  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 17:08:13.529469  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 17:08:13.529501  110282 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 17:08:13.628578  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:13.734129  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 17:08:13.734171  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 17:08:13.846625  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 17:08:13.846656  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 17:08:14.029797  110282 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:08:14.029885  110282 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 17:08:14.129474  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:08:14.241465  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.793587157s)
	I1028 17:08:14.241592  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.60302688s)
	I1028 17:08:15.050271  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:15.656273  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.005145645s)
	I1028 17:08:15.656315  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.926445805s)
	I1028 17:08:15.656339  110282 addons.go:475] Verifying addon ingress=true in "addons-803184"
	I1028 17:08:15.656360  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.916992637s)
	I1028 17:08:15.656430  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.906429557s)
	I1028 17:08:15.656665  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.900702293s)
	I1028 17:08:15.656714  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.807390885s)
	I1028 17:08:15.656867  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.709103357s)
	I1028 17:08:15.656900  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.422717063s)
	I1028 17:08:15.656918  110282 addons.go:475] Verifying addon registry=true in "addons-803184"
	I1028 17:08:15.656975  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.127301939s)
	I1028 17:08:15.656992  110282 addons.go:475] Verifying addon metrics-server=true in "addons-803184"
	I1028 17:08:15.657177  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.316281384s)
	I1028 17:08:15.657967  110282 out.go:177] * Verifying registry addon...
	I1028 17:08:15.657993  110282 out.go:177] * Verifying ingress addon...
	I1028 17:08:15.659081  110282 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-803184 service yakd-dashboard -n yakd-dashboard
	
	I1028 17:08:15.661324  110282 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 17:08:15.661322  110282 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 17:08:15.731593  110282 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 17:08:15.731629  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:15.731900  110282 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 17:08:15.731969  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1028 17:08:15.732213  110282 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
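
Note: the storage-provisioner-rancher error above is Kubernetes' optimistic concurrency at work: another writer updated the StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. The usual remedies are to re-read and retry, or to send a patch, which the API server merges against the latest version; a sketch using the object named in the error:

	# mark local-path as the default StorageClass via a merge patch,
	# sidestepping the read-modify-write race seen above
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
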
	I1028 17:08:16.167702  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.168403  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.455236  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.826535338s)
	W1028 17:08:16.455289  110282 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 17:08:16.455322  110282 retry.go:31] will retry after 337.107711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
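This failure is a CRD-establishment race, not a broken manifest: the same kubectl invocation creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass that depends on them, and the new kind is not yet registered in API discovery when the custom resource is submitted, hence "ensure CRDs are installed first". minikube simply retries (below, the re-run at 17:08:16.793453 with `apply --force` succeeds once the CRDs are established). A minimal sketch of sequencing the apply so the custom resource cannot race its own CRD, assuming the same manifests (illustrative; this run relied on the retry instead):

	# create the CRD, wait until the API server reports it Established, then apply the CR
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml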
	I1028 17:08:16.665404  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.666016  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.761544  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.631944388s)
	I1028 17:08:16.761593  110282 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-803184"
	I1028 17:08:16.763524  110282 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 17:08:16.766034  110282 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 17:08:16.770612  110282 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 17:08:16.770640  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:16.793453  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:17.165808  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.166300  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.269664  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:17.331635  110282 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 17:08:17.331707  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:17.349113  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:17.445621  110282 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 17:08:17.462282  110282 addons.go:234] Setting addon gcp-auth=true in "addons-803184"
	I1028 17:08:17.462347  110282 host.go:66] Checking if "addons-803184" exists ...
	I1028 17:08:17.462709  110282 cli_runner.go:164] Run: docker container inspect addons-803184 --format={{.State.Status}}
	I1028 17:08:17.479911  110282 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 17:08:17.479971  110282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-803184
	I1028 17:08:17.496691  110282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/addons-803184/id_rsa Username:docker}
	I1028 17:08:17.541608  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:17.664947  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.665374  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.769985  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.165187  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.165520  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.270092  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.664773  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.665133  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.770002  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.165697  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.166161  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.269902  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.270771  110282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47726362s)
	I1028 17:08:19.270841  110282 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.790897948s)
	I1028 17:08:19.272848  110282 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 17:08:19.274458  110282 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:19.275859  110282 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 17:08:19.275884  110282 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 17:08:19.293289  110282 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 17:08:19.293321  110282 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 17:08:19.309748  110282 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 17:08:19.309772  110282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 17:08:19.326199  110282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
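The three manifests applied here install minikube's gcp-auth addon: a namespace, a Service, and a mutating admission webhook (the gcp-auth-webhook image selected above) that injects the credentials copied earlier to /var/lib/minikube/google_application_credentials.json into newly created pods. A quick verification sketch, assuming the defaults applied above (illustrative commands, not part of this run):

	# confirm the webhook is registered and its pod matches the selector this log waits on
	kubectl get mutatingwebhookconfigurations
	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth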
	I1028 17:08:19.542127  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:19.642364  110282 addons.go:475] Verifying addon gcp-auth=true in "addons-803184"
	I1028 17:08:19.643993  110282 out.go:177] * Verifying gcp-auth addon...
	I1028 17:08:19.646529  110282 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 17:08:19.649679  110282 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 17:08:19.649700  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:19.750743  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.751179  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.769590  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.150089  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.164819  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.165152  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.269630  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.650295  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.665058  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.665431  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.769960  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.149295  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.165015  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.165404  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.270003  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.650198  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.664869  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.665391  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.770015  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.041617  110282 node_ready.go:53] node "addons-803184" has status "Ready":"False"
	I1028 17:08:22.150265  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.164868  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.165426  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.269976  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.650008  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.664750  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.665180  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.770035  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.150562  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.165229  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.165781  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.269524  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.652291  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.670641  110282 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 17:08:23.670667  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.671005  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.771105  110282 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 17:08:23.771136  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.045055  110282 node_ready.go:49] node "addons-803184" has status "Ready":"True"
	I1028 17:08:24.045081  110282 node_ready.go:38] duration metric: took 13.506692553s for node "addons-803184" to be "Ready" ...
	I1028 17:08:24.045091  110282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:08:24.054058  110282 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace to be "Ready" ...
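The long runs of "current state: Pending" below are minikube's kapi.go poll loops, one per addon label selector, repeating until each matched pod reports Ready. An equivalent wait with plain kubectl, using selectors taken from this log (illustrative; minikube's own loop produced the lines below):

	# block until pods behind each selector are Ready, or time out
	kubectl -n ingress-nginx wait --for=condition=ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m
	kubectl -n kube-system wait --for=condition=ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m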
	I1028 17:08:24.149948  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.165238  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.165523  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.272123  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.652300  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.753473  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.753500  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.853680  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.150596  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.165516  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.165908  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.271463  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.651310  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.664845  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.665474  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.834244  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.060275  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:26.150653  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.165879  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.166112  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.271426  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.650617  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.666303  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.666598  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.770653  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.151295  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.165496  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.165637  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.271133  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.650463  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.665150  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.665422  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.770907  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.151054  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.166459  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.166768  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.271052  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.559626  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:28.650419  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.665455  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.665695  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.772752  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.150934  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.164585  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.164937  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.270971  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.650598  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.665906  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.666341  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.771405  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.150921  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.230271  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.230498  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.331230  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.560877  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:30.650773  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.664712  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.664950  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.771350  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.151020  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.165288  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.165585  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.271574  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.650632  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.665903  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.666517  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.770698  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.150425  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.165986  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.166311  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.271144  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.651132  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.665648  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.665724  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.771538  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.060192  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:33.150828  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.165482  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.165726  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.270875  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.650777  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.665000  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.665309  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.772549  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.150488  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.166123  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.167613  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.271246  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.650347  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.665686  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.666301  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.770640  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.060635  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:35.150801  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.164844  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.165261  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.271100  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.650776  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.664668  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.664768  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.770032  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.150503  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.165760  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.165974  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.270464  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.650403  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.665070  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.665376  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.770789  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.150711  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.164533  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.164903  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.271490  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.560106  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:37.649951  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.664987  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.665177  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.770949  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.150514  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.165644  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.165715  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.271658  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.649860  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.665709  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.666911  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.770777  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.150540  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.165486  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.165593  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.270982  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.560324  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:39.650014  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.665102  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.665721  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.770686  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.150089  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.164974  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.165219  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.271672  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.650863  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.664724  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.665238  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.770391  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.151138  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.165282  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.165549  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.270608  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.650022  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.665355  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.665767  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.771118  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.059739  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:42.150929  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.164869  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.165444  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.270595  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.649869  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.665048  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.665197  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.770305  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.151478  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.166221  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.166449  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.270932  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.650377  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.721368  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.721884  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.771095  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:44.150078  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.165256  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.165620  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.270757  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:44.560507  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:44.650255  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.665134  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.665395  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.771285  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.150254  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:45.165441  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.165547  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.271473  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.649689  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:45.664874  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.665251  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.770467  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.150651  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.165878  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.166280  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.270485  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.560771  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:46.650597  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.666432  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.666971  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.771260  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.151218  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.166352  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.167589  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.271728  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.649954  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.665410  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.665589  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.770814  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.149943  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.165667  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.166180  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.271418  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.650464  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.665790  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.665990  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.770247  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.060485  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:49.151287  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.165025  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.165213  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.270636  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.649855  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.664908  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.665340  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.770688  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.150818  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.165752  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.166372  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.271215  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.651021  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.665050  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.665201  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.785036  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.150229  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.165427  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.165899  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.270889  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.560455  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:51.650142  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.665108  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.665434  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.776425  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.150914  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.164951  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.165706  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.270191  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.650539  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.665635  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.665975  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.771184  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.150413  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.165599  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:53.165858  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.269940  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.650517  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.665497  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:53.665817  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.769691  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.060431  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:54.151223  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.166212  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.166659  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:54.271318  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.650586  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.665510  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:54.665708  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.770167  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.150116  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.165176  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:55.165434  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.270372  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.650776  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.665069  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:55.665424  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.770745  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.150294  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.165152  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:56.165430  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.270855  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.559561  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:56.650813  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.664537  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:56.664913  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.770385  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.150893  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.164820  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:57.165154  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.271199  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.650178  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.664894  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:57.665138  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.770678  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.149567  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.165886  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:58.166264  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.271120  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.559927  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:58.649821  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.666445  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:58.667004  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.771079  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.150540  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:59.165464  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:59.165808  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.270856  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.650398  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:59.665740  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:59.666070  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.770023  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.150782  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.165086  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:00.165337  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.270376  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.650608  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.665182  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:00.665668  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.770551  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.073723  110282 pod_ready.go:103] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:01.200848  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.200969  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:01.201534  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.272656  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.649597  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.665928  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:01.666504  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.771543  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:02.150560  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.165169  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:02.165528  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.271095  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:02.650506  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.665349  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:02.665781  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.770762  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.060648  110282 pod_ready.go:93] pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.060672  110282 pod_ready.go:82] duration metric: took 39.006583775s for pod "amd-gpu-device-plugin-jhlpw" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.060683  110282 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mc8s8" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.065593  110282 pod_ready.go:93] pod "coredns-7c65d6cfc9-mc8s8" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.065620  110282 pod_ready.go:82] duration metric: took 4.930204ms for pod "coredns-7c65d6cfc9-mc8s8" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.065642  110282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.069861  110282 pod_ready.go:93] pod "etcd-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.069879  110282 pod_ready.go:82] duration metric: took 4.230851ms for pod "etcd-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.069891  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.074031  110282 pod_ready.go:93] pod "kube-apiserver-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.074075  110282 pod_ready.go:82] duration metric: took 4.177055ms for pod "kube-apiserver-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.074086  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.078207  110282 pod_ready.go:93] pod "kube-controller-manager-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.078232  110282 pod_ready.go:82] duration metric: took 4.140902ms for pod "kube-controller-manager-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.078245  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rlsxn" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.150077  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:03.165065  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:03.165547  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.270949  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.458646  110282 pod_ready.go:93] pod "kube-proxy-rlsxn" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.458673  110282 pod_ready.go:82] duration metric: took 380.420923ms for pod "kube-proxy-rlsxn" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.458686  110282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.651103  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:03.665097  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:03.665443  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.770559  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.858538  110282 pod_ready.go:93] pod "kube-scheduler-addons-803184" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:03.858565  110282 pod_ready.go:82] duration metric: took 399.869817ms for pod "kube-scheduler-addons-803184" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:03.858580  110282 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:04.150380  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.165292  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:04.165609  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.271891  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:04.649881  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.664818  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:04.665370  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.770340  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.150824  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.164761  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:05.164924  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.270324  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.650696  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.664567  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:05.665146  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.769868  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.864209  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:06.150543  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.165910  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.166363  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:06.269872  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:06.650552  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.665886  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:06.666461  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.770266  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.150892  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.165574  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.165791  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:07.270956  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.650024  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.732116  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:07.732658  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.834083  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.034902  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:08.233421  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.234539  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:08.235973  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.332535  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.650561  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.732416  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:08.732712  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.831953  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.150755  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.165158  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:09.165346  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.271484  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.650785  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.665833  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:09.666060  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.770670  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.150542  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.166495  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:10.166974  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.271347  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.365782  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:10.650816  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.664767  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:10.665482  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.770074  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.150598  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.166082  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:11.166591  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.270728  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.650322  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.665709  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:11.665927  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.769969  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.149471  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.165650  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:12.165879  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.270959  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.650090  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.751676  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:12.752128  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.771140  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.864301  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:13.150655  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.252203  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:13.252501  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.270442  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:13.650528  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.665619  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:13.665868  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.769550  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.150060  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.165130  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:14.165247  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.271009  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.650417  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.665383  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:14.665690  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.770375  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.865451  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:15.149979  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.165098  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:15.165362  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.270859  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:15.650189  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.665465  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:15.665641  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.771033  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.150495  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.165459  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:16.165689  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.269789  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.650472  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.665657  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:16.666189  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.772068  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.150581  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.165529  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:17.165907  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.270416  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.365088  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:17.650170  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.665176  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:09:17.665542  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.771434  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.151047  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.232586  110282 kapi.go:107] duration metric: took 1m2.571261706s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 17:09:18.232964  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.336202  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.649906  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.666599  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.832504  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.151407  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.232414  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.331872  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.434388  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:19.649861  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.665502  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.770926  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.150214  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.166116  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.270511  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.650208  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.666219  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.770561  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.150163  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.165564  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.271396  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.650013  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.665229  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.771025  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.864379  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:22.150736  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.166182  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.270335  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:22.650262  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.665382  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.771088  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.186257  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:23.187099  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.291096  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.650663  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:23.664923  110282 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.769956  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.864581  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:24.150532  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.166506  110282 kapi.go:107] duration metric: took 1m8.505173263s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 17:09:24.276332  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:24.651561  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.832322  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.150352  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.271587  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.650283  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.772081  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.865108  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:26.150447  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.271363  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:26.650537  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.769940  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:27.149898  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.271080  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:27.650700  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.770629  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:28.150687  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.270280  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:28.364649  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:28.649693  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.770447  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:29.149680  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.270476  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:29.650355  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.771431  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:30.150319  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.271479  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:30.364948  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:30.650091  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.773330  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:31.151540  110282 kapi.go:107] duration metric: took 1m11.505006517s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 17:09:31.153288  110282 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-803184 cluster.
	I1028 17:09:31.154690  110282 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 17:09:31.156335  110282 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
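	(For illustration only, not part of the captured log: the `gcp-auth-skip-secret` label mentioned above must be on the pod at creation time. A minimal sketch using this cluster's context; the pod name, image, and label value are placeholders, since the gcp-auth webhook keys off the label itself.)
	
	kubectl --context addons-803184 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth                  # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"     # opt this pod out of credential mounting
	spec:
	  containers:
	  - name: app
	    image: nginx                     # placeholder image
	EOF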
	I1028 17:09:31.269695  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:31.771002  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:32.270697  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:32.365206  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:32.770626  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:33.270128  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:33.773103  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:34.271194  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:34.365242  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:34.770899  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:35.271158  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:35.771910  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:36.271050  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:36.770823  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:36.864611  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:37.271020  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:37.770744  110282 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:38.271237  110282 kapi.go:107] duration metric: took 1m21.505202879s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 17:09:38.273097  110282 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1028 17:09:38.274635  110282 addons.go:510] duration metric: took 1m28.23993761s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1028 17:09:39.364982  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:41.864957  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:44.365046  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:46.864952  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:49.364524  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:51.864961  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:54.364138  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:56.364966  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:58.864722  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:10:01.365015  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:10:03.864227  110282 pod_ready.go:103] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"False"
	I1028 17:10:04.364848  110282 pod_ready.go:93] pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace has status "Ready":"True"
	I1028 17:10:04.364876  110282 pod_ready.go:82] duration metric: took 1m0.50628719s for pod "metrics-server-84c5f94fbc-674zg" in "kube-system" namespace to be "Ready" ...
	I1028 17:10:04.364891  110282 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z7q9t" in "kube-system" namespace to be "Ready" ...
	I1028 17:10:04.369378  110282 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-z7q9t" in "kube-system" namespace has status "Ready":"True"
	I1028 17:10:04.369402  110282 pod_ready.go:82] duration metric: took 4.503001ms for pod "nvidia-device-plugin-daemonset-z7q9t" in "kube-system" namespace to be "Ready" ...
	I1028 17:10:04.369424  110282 pod_ready.go:39] duration metric: took 1m40.324322498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
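	(For illustration only, not part of the captured log: the readiness polling above can be reproduced by hand with kubectl. A minimal sketch against this cluster, assuming one of the label selectors listed above and a comparable timeout.)
	
	kubectl --context addons-803184 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s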
	I1028 17:10:04.369447  110282 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:10:04.369485  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:04.369563  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:04.405890  110282 cri.go:89] found id: "3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:04.405917  110282 cri.go:89] found id: ""
	I1028 17:10:04.405927  110282 logs.go:282] 1 containers: [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f]
	I1028 17:10:04.405981  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.409531  110282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:04.409604  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:04.448147  110282 cri.go:89] found id: "73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:04.448173  110282 cri.go:89] found id: ""
	I1028 17:10:04.448181  110282 logs.go:282] 1 containers: [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4]
	I1028 17:10:04.448227  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.451666  110282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:04.451728  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:04.486717  110282 cri.go:89] found id: "e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:04.486738  110282 cri.go:89] found id: ""
	I1028 17:10:04.486746  110282 logs.go:282] 1 containers: [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d]
	I1028 17:10:04.486800  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.490300  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:04.490359  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:04.522706  110282 cri.go:89] found id: "435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:04.522735  110282 cri.go:89] found id: ""
	I1028 17:10:04.522744  110282 logs.go:282] 1 containers: [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c]
	I1028 17:10:04.522805  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.526174  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:04.526242  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:04.561918  110282 cri.go:89] found id: "623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:04.561942  110282 cri.go:89] found id: ""
	I1028 17:10:04.561952  110282 logs.go:282] 1 containers: [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d]
	I1028 17:10:04.562009  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.565636  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:04.565700  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:04.599837  110282 cri.go:89] found id: "e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:04.599874  110282 cri.go:89] found id: ""
	I1028 17:10:04.599885  110282 logs.go:282] 1 containers: [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1]
	I1028 17:10:04.599953  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.603442  110282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:04.603500  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:04.636798  110282 cri.go:89] found id: "6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:04.636826  110282 cri.go:89] found id: ""
	I1028 17:10:04.636835  110282 logs.go:282] 1 containers: [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc]
	I1028 17:10:04.636893  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:04.640521  110282 logs.go:123] Gathering logs for kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] ...
	I1028 17:10:04.640558  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:04.706235  110282 logs.go:123] Gathering logs for container status ...
	I1028 17:10:04.706278  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:04.753427  110282 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:04.753462  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:04.814201  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:04.814376  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:04.841985  110282 logs.go:123] Gathering logs for etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] ...
	I1028 17:10:04.842029  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:04.885980  110282 logs.go:123] Gathering logs for coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] ...
	I1028 17:10:04.886020  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:04.921279  110282 logs.go:123] Gathering logs for kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] ...
	I1028 17:10:04.921311  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:04.960982  110282 logs.go:123] Gathering logs for kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] ...
	I1028 17:10:04.961019  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:04.994269  110282 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:04.994301  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:05.010012  110282 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:05.010049  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:05.109317  110282 logs.go:123] Gathering logs for kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] ...
	I1028 17:10:05.109352  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:05.152289  110282 logs.go:123] Gathering logs for kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] ...
	I1028 17:10:05.152332  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:05.188911  110282 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:05.188947  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:05.268596  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:05.268635  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:05.268738  110282 out.go:270] X Problems detected in kubelet:
	W1028 17:10:05.268759  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:05.268772  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:05.268788  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:05.268800  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
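	(For illustration only, not part of the captured log: the log-gathering pass above reduces to two crictl steps; <container-id> is a placeholder for an ID printed by the first command.)
	
	sudo crictl ps -a --quiet --name=kube-apiserver   # resolve the container ID
	sudo crictl logs --tail 400 <container-id>        # fetch its recent log lines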
	I1028 17:10:15.269390  110282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:10:15.283609  110282 api_server.go:72] duration metric: took 2m5.24908153s to wait for apiserver process to appear ...
	I1028 17:10:15.283644  110282 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:10:15.283685  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:15.283736  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:15.316858  110282 cri.go:89] found id: "3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:15.316884  110282 cri.go:89] found id: ""
	I1028 17:10:15.316892  110282 logs.go:282] 1 containers: [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f]
	I1028 17:10:15.316948  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.320309  110282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:15.320368  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:15.353440  110282 cri.go:89] found id: "73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:15.353462  110282 cri.go:89] found id: ""
	I1028 17:10:15.353470  110282 logs.go:282] 1 containers: [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4]
	I1028 17:10:15.353520  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.356974  110282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:15.357068  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:15.390721  110282 cri.go:89] found id: "e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:15.390750  110282 cri.go:89] found id: ""
	I1028 17:10:15.390762  110282 logs.go:282] 1 containers: [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d]
	I1028 17:10:15.390824  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.394234  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:15.394299  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:15.428340  110282 cri.go:89] found id: "435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:15.428358  110282 cri.go:89] found id: ""
	I1028 17:10:15.428367  110282 logs.go:282] 1 containers: [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c]
	I1028 17:10:15.428412  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.431826  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:15.431911  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:15.467161  110282 cri.go:89] found id: "623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:15.467197  110282 cri.go:89] found id: ""
	I1028 17:10:15.467207  110282 logs.go:282] 1 containers: [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d]
	I1028 17:10:15.467263  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.470856  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:15.470921  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:15.506667  110282 cri.go:89] found id: "e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:15.506693  110282 cri.go:89] found id: ""
	I1028 17:10:15.506706  110282 logs.go:282] 1 containers: [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1]
	I1028 17:10:15.506766  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.510174  110282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:15.510249  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:15.546325  110282 cri.go:89] found id: "6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:15.546347  110282 cri.go:89] found id: ""
	I1028 17:10:15.546355  110282 logs.go:282] 1 containers: [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc]
	I1028 17:10:15.546397  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:15.549866  110282 logs.go:123] Gathering logs for coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] ...
	I1028 17:10:15.549896  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:15.584618  110282 logs.go:123] Gathering logs for kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] ...
	I1028 17:10:15.584651  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:15.617522  110282 logs.go:123] Gathering logs for kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] ...
	I1028 17:10:15.617554  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:15.653068  110282 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:15.653105  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:15.705535  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:15.705713  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:15.733544  110282 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:15.733592  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:15.751343  110282 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:15.751382  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:15.888048  110282 logs.go:123] Gathering logs for kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] ...
	I1028 17:10:15.888083  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:15.931670  110282 logs.go:123] Gathering logs for etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] ...
	I1028 17:10:15.931708  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:15.974756  110282 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:15.974789  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:16.048818  110282 logs.go:123] Gathering logs for container status ...
	I1028 17:10:16.048861  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:16.089724  110282 logs.go:123] Gathering logs for kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] ...
	I1028 17:10:16.089759  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:16.128936  110282 logs.go:123] Gathering logs for kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] ...
	I1028 17:10:16.128978  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:16.185889  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:16.185929  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:16.185995  110282 out.go:270] X Problems detected in kubelet:
	W1028 17:10:16.186009  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:16.186017  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:16.186028  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:16.186033  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
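
Note on the two kubelet warnings flagged above: they come from the Node authorizer, which only lets a kubelet read secrets referenced by pods already bound to its node, so listing "gcp-auth-certs" in the gcp-auth namespace is denied until such a pod is scheduled there. A quick way to confirm the decision from outside the node (a sketch, assuming the caller may impersonate the node identity; expect "no" until a pod on the node references the secret):

  kubectl --context addons-803184 auth can-i list secrets \
    --namespace gcp-auth \
    --as system:node:addons-803184 --as-group system:nodes
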
	I1028 17:10:26.186741  110282 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1028 17:10:26.190858  110282 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1028 17:10:26.191838  110282 api_server.go:141] control plane version: v1.31.2
	I1028 17:10:26.191865  110282 api_server.go:131] duration metric: took 10.908213353s to wait for apiserver health ...
	I1028 17:10:26.191873  110282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:10:26.191894  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:26.191948  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:26.225575  110282 cri.go:89] found id: "3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:26.225616  110282 cri.go:89] found id: ""
	I1028 17:10:26.225627  110282 logs.go:282] 1 containers: [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f]
	I1028 17:10:26.225689  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.229192  110282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:26.229255  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:26.262556  110282 cri.go:89] found id: "73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:26.262580  110282 cri.go:89] found id: ""
	I1028 17:10:26.262589  110282 logs.go:282] 1 containers: [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4]
	I1028 17:10:26.262647  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.266736  110282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:26.266812  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:26.300967  110282 cri.go:89] found id: "e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:26.300989  110282 cri.go:89] found id: ""
	I1028 17:10:26.300997  110282 logs.go:282] 1 containers: [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d]
	I1028 17:10:26.301063  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.304956  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:26.305053  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:26.339588  110282 cri.go:89] found id: "435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:26.339611  110282 cri.go:89] found id: ""
	I1028 17:10:26.339620  110282 logs.go:282] 1 containers: [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c]
	I1028 17:10:26.339676  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.343202  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:26.343272  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:26.376768  110282 cri.go:89] found id: "623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:26.376795  110282 cri.go:89] found id: ""
	I1028 17:10:26.376806  110282 logs.go:282] 1 containers: [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d]
	I1028 17:10:26.376867  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.380729  110282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:26.380810  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:26.416048  110282 cri.go:89] found id: "e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:26.416071  110282 cri.go:89] found id: ""
	I1028 17:10:26.416079  110282 logs.go:282] 1 containers: [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1]
	I1028 17:10:26.416122  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.419553  110282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:26.419629  110282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:26.453985  110282 cri.go:89] found id: "6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:26.454007  110282 cri.go:89] found id: ""
	I1028 17:10:26.454014  110282 logs.go:282] 1 containers: [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc]
	I1028 17:10:26.454069  110282 ssh_runner.go:195] Run: which crictl
	I1028 17:10:26.457409  110282 logs.go:123] Gathering logs for coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] ...
	I1028 17:10:26.457438  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d"
	I1028 17:10:26.491555  110282 logs.go:123] Gathering logs for kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] ...
	I1028 17:10:26.491584  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1"
	I1028 17:10:26.546664  110282 logs.go:123] Gathering logs for kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] ...
	I1028 17:10:26.546736  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc"
	I1028 17:10:26.581345  110282 logs.go:123] Gathering logs for container status ...
	I1028 17:10:26.581377  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:26.623141  110282 logs.go:123] Gathering logs for kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] ...
	I1028 17:10:26.623176  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c"
	I1028 17:10:26.664601  110282 logs.go:123] Gathering logs for kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] ...
	I1028 17:10:26.664638  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d"
	I1028 17:10:26.698284  110282 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:26.698323  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:26.777242  110282 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:26.777287  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:26.829071  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:26.829254  110282 logs.go:138] Found kubelet problem: Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:26.858029  110282 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:26.858071  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:26.875632  110282 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:26.875675  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:26.975563  110282 logs.go:123] Gathering logs for kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] ...
	I1028 17:10:26.975603  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f"
	I1028 17:10:27.019555  110282 logs.go:123] Gathering logs for etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] ...
	I1028 17:10:27.019591  110282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4"
	I1028 17:10:27.066582  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:27.066615  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:27.066676  110282 out.go:270] X Problems detected in kubelet:
	W1028 17:10:27.066689  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: W1028 17:08:23.628918    1630 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-803184" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-803184' and this object
	W1028 17:10:27.066698  110282 out.go:270]   Oct 28 17:08:23 addons-803184 kubelet[1630]: E1028 17:08:23.628981    1630 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-803184\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-803184' and this object" logger="UnhandledError"
	I1028 17:10:27.066711  110282 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:27.066716  110282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:10:37.079264  110282 system_pods.go:59] 19 kube-system pods found
	I1028 17:10:37.079305  110282 system_pods.go:61] "amd-gpu-device-plugin-jhlpw" [f711d106-eb63-4b6b-8661-25cd70f4f3b1] Running
	I1028 17:10:37.079314  110282 system_pods.go:61] "coredns-7c65d6cfc9-mc8s8" [9f5e6a87-7e82-49cc-bea9-1975bf9e65dd] Running
	I1028 17:10:37.079320  110282 system_pods.go:61] "csi-hostpath-attacher-0" [da3d6207-32ff-44d2-b0ee-df10b36350ac] Running
	I1028 17:10:37.079326  110282 system_pods.go:61] "csi-hostpath-resizer-0" [7e68a1f2-b307-486c-ac3b-4c103de4e95c] Running
	I1028 17:10:37.079354  110282 system_pods.go:61] "csi-hostpathplugin-728fs" [d161e22d-e638-413a-aa21-c02a59e7f793] Running
	I1028 17:10:37.079360  110282 system_pods.go:61] "etcd-addons-803184" [a95b6003-b239-4852-b463-0fff9cd0f206] Running
	I1028 17:10:37.079365  110282 system_pods.go:61] "kindnet-hj2qh" [32e72145-ef94-4e95-b3f8-99108d471a86] Running
	I1028 17:10:37.079371  110282 system_pods.go:61] "kube-apiserver-addons-803184" [39798403-bdbb-47d9-89c1-768e79344f2b] Running
	I1028 17:10:37.079377  110282 system_pods.go:61] "kube-controller-manager-addons-803184" [11d701f0-9111-4089-8635-652492fc24a3] Running
	I1028 17:10:37.079384  110282 system_pods.go:61] "kube-ingress-dns-minikube" [079c2ef4-da73-455a-90bb-fe1a00f5ef5d] Running
	I1028 17:10:37.079401  110282 system_pods.go:61] "kube-proxy-rlsxn" [c8571a1a-da60-4e3a-80b6-4739a4f2b0d7] Running
	I1028 17:10:37.079407  110282 system_pods.go:61] "kube-scheduler-addons-803184" [8b63e9e0-0128-44b5-8ca5-f90c9ea46b5e] Running
	I1028 17:10:37.079414  110282 system_pods.go:61] "metrics-server-84c5f94fbc-674zg" [37927340-66ab-4951-bd4b-59b0e0d01812] Running
	I1028 17:10:37.079422  110282 system_pods.go:61] "nvidia-device-plugin-daemonset-z7q9t" [29592f17-9aa8-4d19-b8d1-dcb2278980ef] Running
	I1028 17:10:37.079428  110282 system_pods.go:61] "registry-66c9cd494c-67lgb" [9af05f14-ce81-44bb-97d1-37dedf7c187c] Running
	I1028 17:10:37.079434  110282 system_pods.go:61] "registry-proxy-nbdps" [cd42d863-c294-464d-b7cd-95396c429181] Running
	I1028 17:10:37.079440  110282 system_pods.go:61] "snapshot-controller-56fcc65765-cdh9r" [865ae345-a0e7-417d-9c96-5544c2832d7e] Running
	I1028 17:10:37.079447  110282 system_pods.go:61] "snapshot-controller-56fcc65765-vwwxr" [dc9caf29-d5cc-4123-96ee-d69a2da2e706] Running
	I1028 17:10:37.079453  110282 system_pods.go:61] "storage-provisioner" [a0b9c49a-8d86-4f02-84fb-10f963133047] Running
	I1028 17:10:37.079460  110282 system_pods.go:74] duration metric: took 10.887580531s to wait for pod list to return data ...
	I1028 17:10:37.079477  110282 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:10:37.082069  110282 default_sa.go:45] found service account: "default"
	I1028 17:10:37.082097  110282 default_sa.go:55] duration metric: took 2.612904ms for default service account to be created ...
	I1028 17:10:37.082115  110282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:10:37.090408  110282 system_pods.go:86] 19 kube-system pods found
	I1028 17:10:37.090450  110282 system_pods.go:89] "amd-gpu-device-plugin-jhlpw" [f711d106-eb63-4b6b-8661-25cd70f4f3b1] Running
	I1028 17:10:37.090459  110282 system_pods.go:89] "coredns-7c65d6cfc9-mc8s8" [9f5e6a87-7e82-49cc-bea9-1975bf9e65dd] Running
	I1028 17:10:37.090465  110282 system_pods.go:89] "csi-hostpath-attacher-0" [da3d6207-32ff-44d2-b0ee-df10b36350ac] Running
	I1028 17:10:37.090470  110282 system_pods.go:89] "csi-hostpath-resizer-0" [7e68a1f2-b307-486c-ac3b-4c103de4e95c] Running
	I1028 17:10:37.090476  110282 system_pods.go:89] "csi-hostpathplugin-728fs" [d161e22d-e638-413a-aa21-c02a59e7f793] Running
	I1028 17:10:37.090481  110282 system_pods.go:89] "etcd-addons-803184" [a95b6003-b239-4852-b463-0fff9cd0f206] Running
	I1028 17:10:37.090486  110282 system_pods.go:89] "kindnet-hj2qh" [32e72145-ef94-4e95-b3f8-99108d471a86] Running
	I1028 17:10:37.090493  110282 system_pods.go:89] "kube-apiserver-addons-803184" [39798403-bdbb-47d9-89c1-768e79344f2b] Running
	I1028 17:10:37.090499  110282 system_pods.go:89] "kube-controller-manager-addons-803184" [11d701f0-9111-4089-8635-652492fc24a3] Running
	I1028 17:10:37.090507  110282 system_pods.go:89] "kube-ingress-dns-minikube" [079c2ef4-da73-455a-90bb-fe1a00f5ef5d] Running
	I1028 17:10:37.090512  110282 system_pods.go:89] "kube-proxy-rlsxn" [c8571a1a-da60-4e3a-80b6-4739a4f2b0d7] Running
	I1028 17:10:37.090519  110282 system_pods.go:89] "kube-scheduler-addons-803184" [8b63e9e0-0128-44b5-8ca5-f90c9ea46b5e] Running
	I1028 17:10:37.090533  110282 system_pods.go:89] "metrics-server-84c5f94fbc-674zg" [37927340-66ab-4951-bd4b-59b0e0d01812] Running
	I1028 17:10:37.090544  110282 system_pods.go:89] "nvidia-device-plugin-daemonset-z7q9t" [29592f17-9aa8-4d19-b8d1-dcb2278980ef] Running
	I1028 17:10:37.090554  110282 system_pods.go:89] "registry-66c9cd494c-67lgb" [9af05f14-ce81-44bb-97d1-37dedf7c187c] Running
	I1028 17:10:37.090561  110282 system_pods.go:89] "registry-proxy-nbdps" [cd42d863-c294-464d-b7cd-95396c429181] Running
	I1028 17:10:37.090568  110282 system_pods.go:89] "snapshot-controller-56fcc65765-cdh9r" [865ae345-a0e7-417d-9c96-5544c2832d7e] Running
	I1028 17:10:37.090575  110282 system_pods.go:89] "snapshot-controller-56fcc65765-vwwxr" [dc9caf29-d5cc-4123-96ee-d69a2da2e706] Running
	I1028 17:10:37.090583  110282 system_pods.go:89] "storage-provisioner" [a0b9c49a-8d86-4f02-84fb-10f963133047] Running
	I1028 17:10:37.090595  110282 system_pods.go:126] duration metric: took 8.471887ms to wait for k8s-apps to be running ...
	I1028 17:10:37.090608  110282 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:10:37.090676  110282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:10:37.102389  110282 system_svc.go:56] duration metric: took 11.771068ms WaitForService to wait for kubelet
	I1028 17:10:37.102427  110282 kubeadm.go:582] duration metric: took 2m27.067901172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:10:37.102457  110282 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:10:37.105299  110282 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1028 17:10:37.105336  110282 node_conditions.go:123] node cpu capacity is 8
	I1028 17:10:37.105351  110282 node_conditions.go:105] duration metric: took 2.888292ms to run NodePressure ...
	I1028 17:10:37.105364  110282 start.go:241] waiting for startup goroutines ...
	I1028 17:10:37.105371  110282 start.go:246] waiting for cluster config update ...
	I1028 17:10:37.105388  110282 start.go:255] writing updated cluster config ...
	I1028 17:10:37.105681  110282 ssh_runner.go:195] Run: rm -f paused
	I1028 17:10:37.154349  110282 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:10:37.156332  110282 out.go:177] * Done! kubectl is now configured to use "addons-803184" cluster and "default" namespace by default
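
The health-wait loop above re-gathers the same component logs on every poll, and each step is a plain crictl or journalctl invocation. The same data can be pulled by hand from the node for triage; the commands below are taken from the log itself, with the container ID left as a placeholder to be filled from the first command:

  out/minikube-linux-amd64 -p addons-803184 ssh
  # inside the node:
  sudo crictl ps -a --quiet --name=kube-apiserver        # resolve the container ID
  sudo /usr/bin/crictl logs --tail 400 <container-id>    # tail that container's logs
  sudo journalctl -u kubelet -n 400                      # kubelet unit journal
  sudo journalctl -u crio -n 400                         # CRI-O unit journal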
	
	
	==> CRI-O <==
	Oct 28 17:13:57 addons-803184 crio[1033]: time="2024-10-28 17:13:57.393815505Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-qfhpf Namespace:ingress-nginx ID:5f487d7817107e080b0783a898b4a0591431c819a9822771e2a78812613c7d2b UID:8eb08805-562d-4a29-844b-3e3c6ff89c31 NetNS:/var/run/netns/681475d4-0e98-430c-9225-e5f850ee3ec1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 28 17:13:57 addons-803184 crio[1033]: time="2024-10-28 17:13:57.393927395Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-qfhpf from CNI network \"kindnet\" (type=ptp)"
	Oct 28 17:13:57 addons-803184 crio[1033]: time="2024-10-28 17:13:57.433700439Z" level=info msg="Stopped pod sandbox: 5f487d7817107e080b0783a898b4a0591431c819a9822771e2a78812613c7d2b" id=7c4a528d-3ad8-40e8-8fd0-cf7617320b14 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:13:57 addons-803184 crio[1033]: time="2024-10-28 17:13:57.721245579Z" level=info msg="Removing container: ec97e84f5438f8fe968ea983f56cba575939154574724dc7e43e85b5d3abb67a" id=a179b69b-7d76-4842-afa5-88617b3892f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 17:13:57 addons-803184 crio[1033]: time="2024-10-28 17:13:57.733391628Z" level=info msg="Removed container ec97e84f5438f8fe968ea983f56cba575939154574724dc7e43e85b5d3abb67a: ingress-nginx/ingress-nginx-controller-5f85ff4588-qfhpf/controller" id=a179b69b-7d76-4842-afa5-88617b3892f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.644741247Z" level=info msg="Removing container: 4edb24fd79a1cca20f78ed7f0f2edefe65db55df817c22cd36e903b20fe4c0f4" id=b33423d9-7072-4aa1-b9cc-152cc591ee57 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.656873839Z" level=info msg="Removed container 4edb24fd79a1cca20f78ed7f0f2edefe65db55df817c22cd36e903b20fe4c0f4: ingress-nginx/ingress-nginx-admission-create-xqrnj/create" id=b33423d9-7072-4aa1-b9cc-152cc591ee57 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.658185281Z" level=info msg="Removing container: 236dccdf82ebc0c5b7feb82eebb7fbf06dc51480ee9176642f8d068b25af8e04" id=696f2d99-2484-4f2d-811e-3ff0a4326d98 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.672482474Z" level=info msg="Removed container 236dccdf82ebc0c5b7feb82eebb7fbf06dc51480ee9176642f8d068b25af8e04: ingress-nginx/ingress-nginx-admission-patch-prp8k/patch" id=696f2d99-2484-4f2d-811e-3ff0a4326d98 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.673897243Z" level=info msg="Stopping pod sandbox: fe9f57301d9e5fbc43eb703a0028b259421619decb38c62a41a9b32efd2fcfa6" id=08ee07c5-7fcc-4722-9de6-6eb3b269d97d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.673931857Z" level=info msg="Stopped pod sandbox (already stopped): fe9f57301d9e5fbc43eb703a0028b259421619decb38c62a41a9b32efd2fcfa6" id=08ee07c5-7fcc-4722-9de6-6eb3b269d97d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.674156862Z" level=info msg="Removing pod sandbox: fe9f57301d9e5fbc43eb703a0028b259421619decb38c62a41a9b32efd2fcfa6" id=23b4f11a-d154-4303-b1f1-1b2b02f32277 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.680218612Z" level=info msg="Removed pod sandbox: fe9f57301d9e5fbc43eb703a0028b259421619decb38c62a41a9b32efd2fcfa6" id=23b4f11a-d154-4303-b1f1-1b2b02f32277 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.680643185Z" level=info msg="Stopping pod sandbox: 5f487d7817107e080b0783a898b4a0591431c819a9822771e2a78812613c7d2b" id=0a290134-f113-435e-8c6d-67ab714a9845 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.680677984Z" level=info msg="Stopped pod sandbox (already stopped): 5f487d7817107e080b0783a898b4a0591431c819a9822771e2a78812613c7d2b" id=0a290134-f113-435e-8c6d-67ab714a9845 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.680966152Z" level=info msg="Removing pod sandbox: 5f487d7817107e080b0783a898b4a0591431c819a9822771e2a78812613c7d2b" id=f2a1738e-4f42-4988-bbc0-2282066e1c11 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.686357400Z" level=info msg="Removed pod sandbox: 5f487d7817107e080b0783a898b4a0591431c819a9822771e2a78812613c7d2b" id=f2a1738e-4f42-4988-bbc0-2282066e1c11 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.686772034Z" level=info msg="Stopping pod sandbox: ed5b57735d1d6f9cffc570f035d02f6072320a8a11e529c9ac7cea08e7337c6b" id=da7513de-d761-4e1d-8b95-c52696694cea name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.686811261Z" level=info msg="Stopped pod sandbox (already stopped): ed5b57735d1d6f9cffc570f035d02f6072320a8a11e529c9ac7cea08e7337c6b" id=da7513de-d761-4e1d-8b95-c52696694cea name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.687107795Z" level=info msg="Removing pod sandbox: ed5b57735d1d6f9cffc570f035d02f6072320a8a11e529c9ac7cea08e7337c6b" id=1a098fd1-94a1-4034-ae88-082c98b936f9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.693546256Z" level=info msg="Removed pod sandbox: ed5b57735d1d6f9cffc570f035d02f6072320a8a11e529c9ac7cea08e7337c6b" id=1a098fd1-94a1-4034-ae88-082c98b936f9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.693946223Z" level=info msg="Stopping pod sandbox: d5db8fd49eee7ea34330018f1a8bbf2ae5c68633d4184bf786181b0df568fd3c" id=1afa4dc6-8dbc-4da5-a346-43bd11f877ef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.693990128Z" level=info msg="Stopped pod sandbox (already stopped): d5db8fd49eee7ea34330018f1a8bbf2ae5c68633d4184bf786181b0df568fd3c" id=1afa4dc6-8dbc-4da5-a346-43bd11f877ef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.694408466Z" level=info msg="Removing pod sandbox: d5db8fd49eee7ea34330018f1a8bbf2ae5c68633d4184bf786181b0df568fd3c" id=3fd98fad-ed00-45dd-992b-e6c213c87260 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 17:14:04 addons-803184 crio[1033]: time="2024-10-28 17:14:04.699895376Z" level=info msg="Removed pod sandbox: d5db8fd49eee7ea34330018f1a8bbf2ae5c68633d4184bf786181b0df568fd3c" id=3fd98fad-ed00-45dd-992b-e6c213c87260 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f756155c77b6c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   c98e86b156819       hello-world-app-55bf9c44b4-hr2bl
	5fe124ad0d846       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   a5bf6a2891ba6       nginx
	47487a2d71c3f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   d93d97479c706       busybox
	df81cd12b25ac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   76c2e84ba02fe       metrics-server-84c5f94fbc-674zg
	48009f6110960       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   35c23dc4d7ee0       local-path-provisioner-86d989889c-xsbnr
	812751f5e2a24       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   7497107d3bf1e       storage-provisioner
	e00de5529feb3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   8ed6236d4db5d       coredns-7c65d6cfc9-mc8s8
	6b73547e89dce       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                                        8 minutes ago       Running             kindnet-cni               0                   f05cc2dabc46b       kindnet-hj2qh
	623595caf3621       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   0e87349546e7b       kube-proxy-rlsxn
	3ae549dfb8f03       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   376b526902bbf       kube-apiserver-addons-803184
	e146d6b67329b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   042db5d4b535d       kube-controller-manager-addons-803184
	435c4410be526       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   00173b9182265       kube-scheduler-addons-803184
	73de1b918a7a5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   cac6a432d93bb       etcd-addons-803184
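
In the table above the CONTAINER column is a truncated ID and the IMAGE column mixes digest-pinned references with bare image IDs. crictl generally resolves unique ID prefixes, so any row can be expanded for detail, e.g. for the coredns container:

  sudo crictl inspect e00de5529feb3    # full JSON state for the container
  sudo crictl images                   # map bare image IDs back to repo tags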
	
	
	==> coredns [e00de5529feb3105a9d5595de30251f65392278793d81aa68646e19b14cbb70d] <==
	[INFO] 10.244.0.20:52760 - 62370 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006049135s
	[INFO] 10.244.0.20:38841 - 33858 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007229085s
	[INFO] 10.244.0.20:52760 - 2859 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007036938s
	[INFO] 10.244.0.20:49612 - 12309 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007284761s
	[INFO] 10.244.0.20:59330 - 33188 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007436762s
	[INFO] 10.244.0.20:50412 - 18069 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007311714s
	[INFO] 10.244.0.20:49057 - 13294 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007560612s
	[INFO] 10.244.0.20:56522 - 43990 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007540693s
	[INFO] 10.244.0.20:56389 - 34881 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007659331s
	[INFO] 10.244.0.20:38841 - 24151 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007288096s
	[INFO] 10.244.0.20:49612 - 29270 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007418688s
	[INFO] 10.244.0.20:52760 - 2940 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007424443s
	[INFO] 10.244.0.20:50412 - 10927 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007663592s
	[INFO] 10.244.0.20:49057 - 29269 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007386529s
	[INFO] 10.244.0.20:56522 - 14679 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007431099s
	[INFO] 10.244.0.20:56389 - 29312 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007394252s
	[INFO] 10.244.0.20:59330 - 18559 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00759662s
	[INFO] 10.244.0.20:49612 - 1719 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000167964s
	[INFO] 10.244.0.20:56522 - 20388 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066237s
	[INFO] 10.244.0.20:52760 - 23984 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000180563s
	[INFO] 10.244.0.20:49057 - 27894 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00019936s
	[INFO] 10.244.0.20:56389 - 40658 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000177207s
	[INFO] 10.244.0.20:59330 - 8954 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070734s
	[INFO] 10.244.0.20:38841 - 8655 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000374187s
	[INFO] 10.244.0.20:50412 - 38985 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000162218s
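
The NXDOMAIN burst above is ordinary resolv.conf search-path expansion, not a resolver fault: under the default options ndots:5, hello-world-app.default.svc.cluster.local has fewer than five dots, so the resolver first tries it with the host-inherited suffixes (c.k8s-minikube.internal, google.internal) and only then issues the absolute query, which answers NOERROR. The search list can be inspected from the busybox test pod, and expansion skipped entirely with a trailing dot:

  kubectl --context addons-803184 exec busybox -- cat /etc/resolv.conf
  kubectl --context addons-803184 exec busybox -- nslookup hello-world-app.default.svc.cluster.local.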
	
	
	==> describe nodes <==
	Name:               addons-803184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-803184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=addons-803184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_08_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-803184
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:08:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-803184
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:16:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:14:13 +0000   Mon, 28 Oct 2024 17:08:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:14:13 +0000   Mon, 28 Oct 2024 17:08:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:14:13 +0000   Mon, 28 Oct 2024 17:08:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:14:13 +0000   Mon, 28 Oct 2024 17:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-803184
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2b47af00984e389c68af9dc7a29c31
	  System UUID:                bf2f7dbd-aeea-4147-ba5b-eea51abda43d
	  Boot ID:                    9ca5ee1d-76d3-40f6-894f-a30303f688cc
	  Kernel Version:             5.15.0-1070-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  default                     hello-world-app-55bf9c44b4-hr2bl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 coredns-7c65d6cfc9-mc8s8                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m22s
	  kube-system                 etcd-addons-803184                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m28s
	  kube-system                 kindnet-hj2qh                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m22s
	  kube-system                 kube-apiserver-addons-803184               250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-controller-manager-addons-803184      200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-rlsxn                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-addons-803184               100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 metrics-server-84c5f94fbc-674zg            100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m18s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  local-path-storage          local-path-provisioner-86d989889c-xsbnr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m18s                  kube-proxy       
	  Normal   Starting                 8m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m33s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m33s (x8 over 8m33s)  kubelet          Node addons-803184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m33s (x8 over 8m33s)  kubelet          Node addons-803184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m33s (x7 over 8m33s)  kubelet          Node addons-803184 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m28s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m28s                  kubelet          Node addons-803184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m28s                  kubelet          Node addons-803184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m28s                  kubelet          Node addons-803184 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m23s                  node-controller  Node addons-803184 event: Registered Node addons-803184 in Controller
	  Normal   NodeReady                8m9s                   kubelet          Node addons-803184 status is now: NodeReady
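
Headroom is comfortable in the node description above (950m CPU requested of 8 cores, 420Mi of ~32Gi memory), so the failures in this run are not scheduling pressure. If the metrics API is serving, live usage can be read alongside these static requests:

  kubectl --context addons-803184 top node
  kubectl --context addons-803184 describe node addons-803184 | grep -A 8 'Allocated resources'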
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 50 ac a9 60 41 08 06
	[Oct28 16:57] IPv4: martian source 10.244.0.1 from 10.244.0.47, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 8c fc bf 5e 5d 08 06
	[Oct28 16:58] IPv4: martian source 10.244.0.1 from 10.244.0.48, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 ca 3f 1e c5 5a 08 06
	[ +23.638784] IPv4: martian source 10.244.0.1 from 10.244.0.49, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e8 fb 71 c4 cc 08 06
	[Oct28 16:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 e9 c4 bd 3e 0d 08 06
	[ +22.900129] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 49 91 d3 37 da 08 06
	[Oct28 17:11] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +1.015600] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +2.015817] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +4.127681] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +8.195365] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[Oct28 17:12] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[ +32.253574] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
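
The repeated "martian source" lines are the kernel flagging packets whose address pairing is impossible on the receiving interface; here, traffic to pod IP 10.244.0.20 carrying a loopback source (127.0.0.1) on eth0, a known-noisy artifact of bridged container traffic rather than a fault. They appear because martian logging is enabled on the node; the relevant switches:

  sysctl net.ipv4.conf.all.log_martians    # 1 = log martian packets
  sysctl net.ipv4.conf.all.rp_filter       # reverse-path filter mode (0/1/2)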
	
	
	==> etcd [73de1b918a7a5fd4753b0587e1ada7b31e7e891034c23594c6e9253f52bb77f4] <==
	{"level":"warn","ts":"2024-10-28T17:08:12.733268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.994814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-10-28T17:08:12.733319Z","caller":"traceutil/trace.go:171","msg":"trace[1954612271] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:409; }","duration":"195.058487ms","start":"2024-10-28T17:08:12.538250Z","end":"2024-10-28T17:08:12.733308Z","steps":["trace[1954612271] 'agreement among raft nodes before linearized reading'  (duration: 194.958626ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.740649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.294141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-28T17:08:12.835507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.286876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-rlsxn\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-10-28T17:08:12.838767Z","caller":"traceutil/trace.go:171","msg":"trace[1719793517] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-rlsxn; range_end:; response_count:1; response_revision:415; }","duration":"104.556054ms","start":"2024-10-28T17:08:12.734191Z","end":"2024-10-28T17:08:12.838748Z","steps":["trace[1719793517] 'agreement among raft nodes before linearized reading'  (duration: 101.209093ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.836884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.439315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-803184\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-10-28T17:08:12.839174Z","caller":"traceutil/trace.go:171","msg":"trace[640734739] range","detail":"{range_begin:/registry/minions/addons-803184; range_end:; response_count:1; response_revision:415; }","duration":"104.731509ms","start":"2024-10-28T17:08:12.734427Z","end":"2024-10-28T17:08:12.839158Z","steps":["trace[640734739] 'agreement among raft nodes before linearized reading'  (duration: 102.414378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.837072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.687466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:08:12.839547Z","caller":"traceutil/trace.go:171","msg":"trace[779157548] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:415; }","duration":"105.158019ms","start":"2024-10-28T17:08:12.734377Z","end":"2024-10-28T17:08:12.839535Z","steps":["trace[779157548] 'agreement among raft nodes before linearized reading'  (duration: 102.67492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.837112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.903098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:08:12.839931Z","caller":"traceutil/trace.go:171","msg":"trace[2084882310] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:415; }","duration":"105.718598ms","start":"2024-10-28T17:08:12.734201Z","end":"2024-10-28T17:08:12.839920Z","steps":["trace[2084882310] 'agreement among raft nodes before linearized reading'  (duration: 102.890249ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:12.842441Z","caller":"traceutil/trace.go:171","msg":"trace[734434833] range","detail":"{range_begin:/registry/daemonsets/kube-system/amd-gpu-device-plugin; range_end:; response_count:0; response_revision:414; }","duration":"291.759872ms","start":"2024-10-28T17:08:12.538333Z","end":"2024-10-28T17:08:12.830093Z","steps":["trace[734434833] 'agreement among raft nodes before linearized reading'  (duration: 202.275165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:12.842512Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:08:12.538313Z","time spent":"304.176827ms","remote":"127.0.0.1:44962","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":29,"request content":"key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" "}
	{"level":"info","ts":"2024-10-28T17:08:13.245277Z","caller":"traceutil/trace.go:171","msg":"trace[1499117157] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"101.046866ms","start":"2024-10-28T17:08:13.144211Z","end":"2024-10-28T17:08:13.245258Z","steps":["trace[1499117157] 'process raft request'  (duration: 97.479313ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.547721Z","caller":"traceutil/trace.go:171","msg":"trace[1692226229] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"115.57696ms","start":"2024-10-28T17:08:13.432124Z","end":"2024-10-28T17:08:13.547701Z","steps":["trace[1692226229] 'process raft request'  (duration: 113.493556ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.548851Z","caller":"traceutil/trace.go:171","msg":"trace[1209363146] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:460; }","duration":"102.101129ms","start":"2024-10-28T17:08:13.446735Z","end":"2024-10-28T17:08:13.548836Z","steps":["trace[1209363146] 'read index received'  (duration: 98.894041ms)","trace[1209363146] 'applied index is now lower than readState.Index'  (duration: 3.206309ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T17:08:13.549109Z","caller":"traceutil/trace.go:171","msg":"trace[13686491] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"114.15528ms","start":"2024-10-28T17:08:13.434942Z","end":"2024-10-28T17:08:13.549098Z","steps":["trace[13686491] 'process raft request'  (duration: 113.680799ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.549333Z","caller":"traceutil/trace.go:171","msg":"trace[1084945865] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"104.646427ms","start":"2024-10-28T17:08:13.444668Z","end":"2024-10-28T17:08:13.549314Z","steps":["trace[1084945865] 'process raft request'  (duration: 104.018849ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:08:13.549502Z","caller":"traceutil/trace.go:171","msg":"trace[924302981] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"102.950687ms","start":"2024-10-28T17:08:13.446544Z","end":"2024-10-28T17:08:13.549494Z","steps":["trace[924302981] 'process raft request'  (duration: 102.22335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:08:13.549749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.001409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-28T17:08:13.549809Z","caller":"traceutil/trace.go:171","msg":"trace[236314811] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:452; }","duration":"103.069302ms","start":"2024-10-28T17:08:13.446730Z","end":"2024-10-28T17:08:13.549799Z","steps":["trace[236314811] 'agreement among raft nodes before linearized reading'  (duration: 102.982414ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:09:22.996485Z","caller":"traceutil/trace.go:171","msg":"trace[1224421849] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"113.666334ms","start":"2024-10-28T17:09:22.882799Z","end":"2024-10-28T17:09:22.996466Z","steps":["trace[1224421849] 'process raft request'  (duration: 113.569703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:23.184274Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.888879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-prslp\" ","response":"range_response_count:1 size:3937"}
	{"level":"info","ts":"2024-10-28T17:09:23.184355Z","caller":"traceutil/trace.go:171","msg":"trace[1664509026] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-prslp; range_end:; response_count:1; response_revision:1148; }","duration":"129.978957ms","start":"2024-10-28T17:09:23.054360Z","end":"2024-10-28T17:09:23.184339Z","steps":["trace[1664509026] 'range keys from in-memory index tree'  (duration: 129.767424ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:09:33.948521Z","caller":"traceutil/trace.go:171","msg":"trace[1215977858] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"156.152471ms","start":"2024-10-28T17:09:33.792344Z","end":"2024-10-28T17:09:33.948496Z","steps":["trace[1215977858] 'process raft request'  (duration: 98.894578ms)","trace[1215977858] 'compare'  (duration: 56.728244ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:16:32 up 58 min,  0 users,  load average: 0.07, 0.64, 0.86
	Linux addons-803184 5.15.0-1070-gcp #78~20.04.1-Ubuntu SMP Wed Oct 9 22:05:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6b73547e89dceeef3fb7c4004a74974df90b4cc2fa98ca9c81951501c292b8fc] <==
	I1028 17:14:23.331938       1 main.go:300] handling current node
	I1028 17:14:33.340001       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:14:33.340041       1 main.go:300] handling current node
	I1028 17:14:43.331933       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:14:43.331970       1 main.go:300] handling current node
	I1028 17:14:53.335009       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:14:53.335053       1 main.go:300] handling current node
	I1028 17:15:03.336484       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:15:03.336525       1 main.go:300] handling current node
	I1028 17:15:13.330865       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:15:13.330904       1 main.go:300] handling current node
	I1028 17:15:23.334431       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:15:23.334466       1 main.go:300] handling current node
	I1028 17:15:33.337806       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:15:33.337847       1 main.go:300] handling current node
	I1028 17:15:43.336192       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:15:43.336287       1 main.go:300] handling current node
	I1028 17:15:53.335350       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:15:53.335385       1 main.go:300] handling current node
	I1028 17:16:03.330672       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:16:03.330730       1 main.go:300] handling current node
	I1028 17:16:13.330825       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:16:13.330877       1 main.go:300] handling current node
	I1028 17:16:23.335167       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:16:23.335219       1 main.go:300] handling current node
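
The kindnet log is a healthy heartbeat: one "Handling node / handling current node" pair every ten seconds for the single node, with no route or CNI errors in between, so nothing here is actionable. The same tail can be pulled through kubectl (daemonset label assumed to be app=kindnet):

  kubectl --context addons-803184 -n kube-system logs -l app=kindnet --tail=20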
	
	
	==> kube-apiserver [3ae549dfb8f0306afc9487cc9c00be12be5b6bc817c8dff896bc3839613df59f] <==
	 > logger="UnhandledError"
	E1028 17:10:04.249071       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.218.115:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.218.115:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.218.115:443: connect: connection refused" logger="UnhandledError"
	I1028 17:10:04.280638       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 17:10:47.843425       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52348: use of closed network connection
	E1028 17:10:48.008478       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52380: use of closed network connection
	I1028 17:10:56.981174       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.243.234"}
	I1028 17:11:27.391726       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 17:11:28.409463       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 17:11:29.660195       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 17:11:30.051527       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 17:11:30.251109       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.97.116"}
	I1028 17:11:53.210874       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.211032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.225176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.225226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.227091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.227133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.239219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.239269       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:11:53.253710       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:11:53.253849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 17:11:54.228238       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 17:11:54.253957       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 17:11:54.360339       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 17:13:50.042053       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.240.197"}
	
	
	==> kube-controller-manager [e146d6b67329b03bab25aa2452b2d8ee0b9a6b5cf88ad7c5a9818a2d169b37a1] <==
	E1028 17:14:25.535660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:14:28.518021       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:14:28.518059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:14:39.553633       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:14:39.553673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:14:48.470623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:14:48.470664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:00.038155       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:00.038202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:17.732293       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:17.732348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:20.724495       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:20.724541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:29.960057       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:29.960096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:31.752850       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:31.752893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:51.632368       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:51.632415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:53.290530       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:53.290577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:02.508294       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:02.508367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:15.475598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:15.475645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [623595caf36211b2a546c53f5e64979ddc2d898449f76b651d0ba9add0458a3d] <==
	I1028 17:08:13.236451       1 server_linux.go:66] "Using iptables proxy"
	I1028 17:08:14.144186       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 17:08:14.144285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:08:14.441347       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 17:08:14.441498       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:08:14.444289       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:08:14.444876       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:08:14.445171       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:08:14.446715       1 config.go:199] "Starting service config controller"
	I1028 17:08:14.446793       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:08:14.446852       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:08:14.448135       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:08:14.446891       1 config.go:328] "Starting node config controller"
	I1028 17:08:14.448241       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:08:14.547558       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:08:14.549043       1 shared_informer.go:320] Caches are synced for node config
	I1028 17:08:14.549193       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [435c4410be52649f132de011f591feff06d668e561a7d54bd1eab1d252e3341c] <==
	W1028 17:08:02.144871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 17:08:02.144889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:02.144934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:08:02.144956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:02.950479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 17:08:02.950523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:02.965016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:08:02.965052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.015547       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 17:08:03.015590       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 17:08:03.015635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 17:08:03.015667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.022111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:08:03.022157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.076513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 17:08:03.076562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.076513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 17:08:03.076605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.079760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:08:03.079811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.151940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 17:08:03.151982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:08:03.179325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 17:08:03.179370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1028 17:08:04.741621       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 17:14:34 addons-803184 kubelet[1630]: E1028 17:14:34.613623    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135674613428756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:14:34 addons-803184 kubelet[1630]: E1028 17:14:34.613670    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135674613428756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:14:44 addons-803184 kubelet[1630]: E1028 17:14:44.615961    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135684615680320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:14:44 addons-803184 kubelet[1630]: E1028 17:14:44.616004    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135684615680320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:14:54 addons-803184 kubelet[1630]: E1028 17:14:54.618973    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135694618685173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:14:54 addons-803184 kubelet[1630]: E1028 17:14:54.619030    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135694618685173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:04 addons-803184 kubelet[1630]: E1028 17:15:04.621797    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135704621574061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:04 addons-803184 kubelet[1630]: E1028 17:15:04.621832    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135704621574061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:14 addons-803184 kubelet[1630]: E1028 17:15:14.624461    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135714624265675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:14 addons-803184 kubelet[1630]: E1028 17:15:14.624496    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135714624265675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:24 addons-803184 kubelet[1630]: E1028 17:15:24.627164    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135724626898405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:24 addons-803184 kubelet[1630]: E1028 17:15:24.627199    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135724626898405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:34 addons-803184 kubelet[1630]: E1028 17:15:34.629008    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135734628798549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:34 addons-803184 kubelet[1630]: E1028 17:15:34.629044    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135734628798549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:44 addons-803184 kubelet[1630]: E1028 17:15:44.631534    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135744631273627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:44 addons-803184 kubelet[1630]: E1028 17:15:44.631582    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135744631273627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:48 addons-803184 kubelet[1630]: I1028 17:15:48.437134    1630 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 17:15:54 addons-803184 kubelet[1630]: E1028 17:15:54.633399    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135754633212565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:54 addons-803184 kubelet[1630]: E1028 17:15:54.633433    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135754633212565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:04 addons-803184 kubelet[1630]: E1028 17:16:04.635327    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135764635113335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:04 addons-803184 kubelet[1630]: E1028 17:16:04.635368    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135764635113335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:14 addons-803184 kubelet[1630]: E1028 17:16:14.638141    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135774637915026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:14 addons-803184 kubelet[1630]: E1028 17:16:14.638186    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135774637915026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:24 addons-803184 kubelet[1630]: E1028 17:16:24.640338    1630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135784640092562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:24 addons-803184 kubelet[1630]: E1028 17:16:24.640381    1630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135784640092562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [812751f5e2a247ec37efb705c3eae0e2c65a9209dce8df8470218cf396718428] <==
	I1028 17:08:24.837029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 17:08:24.846747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 17:08:24.846822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 17:08:24.856451       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 17:08:24.856505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"98a4a8f4-6ecd-4758-8640-ac8d02da712d", APIVersion:"v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-803184_e9eb44be-1eca-44fd-a052-5a56aabaeb8b became leader
	I1028 17:08:24.856647       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-803184_e9eb44be-1eca-44fd-a052-5a56aabaeb8b!
	I1028 17:08:24.957775       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-803184_e9eb44be-1eca-44fd-a052-5a56aabaeb8b!
	

-- /stdout --
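Two recurring signatures in the logs above are separable from the metrics-server failure itself: the kube-controller-manager's repeated "failed to list *v1.PartialObjectMetadata" errors (stale metadata-informer watches on the snapshot.storage.k8s.io resources whose cacher watchers the apiserver terminated at 17:11:54) and the kubelet's "missing image stats" errors against the cri-o image filesystem. A minimal triage sketch, assuming the addons-803184 cluster from this run is still up; these commands are illustrative and were not part of the recorded test:

	# Check whether the snapshot CRDs the controller-manager is still watching exist at all.
	kubectl --context addons-803184 get crd | grep snapshot.storage.k8s.io
	# Query the CRI image-filesystem stats the kubelet eviction manager reports as
	# missing (runs on the node; assumes the cri-o runtime used by this job).
	out/minikube-linux-amd64 -p addons-803184 ssh -- sudo crictl imagefsinfo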
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-803184 -n addons-803184
helpers_test.go:261: (dbg) Run:  kubectl --context addons-803184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (314.36s)
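The kube-apiserver log above carries the proximate symptom for this failure: requests to v1beta1.metrics.k8s.io were refused ("dial tcp 10.97.218.115:443: connect: connection refused"). A hedged starting point for checking the aggregated API by hand; the k8s-app=metrics-server label is assumed from the standard metrics-server manifests, not taken from this run:

	# Inspect the aggregated APIService registration and the backing pod.
	kubectl --context addons-803184 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-803184 -n kube-system get pods -l k8s-app=metrics-server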

TestFunctional/parallel/PersistentVolumeClaim (189s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8026a6e4-59aa-432e-8d32-5fd77b054a42] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004503886s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-301254 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-301254 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-301254 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-301254 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [54095398-6c59-4521-8b12-a906fe652ec4] Pending
helpers_test.go:344: "sp-pod" [54095398-6c59-4521-8b12-a906fe652ec4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-301254 -n functional-301254
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-10-28 17:22:43.010811533 +0000 UTC m=+954.451098627
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-301254 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-301254 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-301254/192.168.49.2
Start Time:       Mon, 28 Oct 2024 17:19:42 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:  10.244.0.11
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ljlb4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-ljlb4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-301254
  Warning  Failed     2m14s                kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     90s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    62s (x3 over 3m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     27s (x3 over 2m14s)  kubelet            Error: ErrImagePull
  Warning  Failed     27s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    4s (x4 over 2m13s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     4s (x4 over 2m13s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-301254 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-301254 logs sp-pod -n default: exit status 1 (67.753645ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-301254 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
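The toomanyrequests events above point at Docker Hub's anonymous pull rate limit rather than at the PVC machinery: the pod was scheduled and its volume mounted, but docker.io/nginx never arrived. One hedged workaround when reproducing locally is to side-load the image so the kubelet never pulls from Docker Hub; minikube image load copies a host-cached image into the cluster's container runtime:

	# Pull once on the host (authenticated, if docker login has been run), then
	# copy the image into the functional-301254 node so cri-o finds it locally.
	docker pull docker.io/nginx
	out/minikube-linux-amd64 -p functional-301254 image load docker.io/nginx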
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-301254
helpers_test.go:235: (dbg) docker inspect functional-301254:

-- stdout --
	[
	    {
	        "Id": "f87adbb047e24cb4d6050b4e4c68fd75c7b66563743c59a0f35b4ff209962ae8",
	        "Created": "2024-10-28T17:17:28.56577742Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 135348,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T17:17:28.681215627Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b614a1ff29c6e85b537175184edffd528c6bd99b5b0eb51bb6059bd4ad5ba0a2",
	        "ResolvConfPath": "/var/lib/docker/containers/f87adbb047e24cb4d6050b4e4c68fd75c7b66563743c59a0f35b4ff209962ae8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f87adbb047e24cb4d6050b4e4c68fd75c7b66563743c59a0f35b4ff209962ae8/hostname",
	        "HostsPath": "/var/lib/docker/containers/f87adbb047e24cb4d6050b4e4c68fd75c7b66563743c59a0f35b4ff209962ae8/hosts",
	        "LogPath": "/var/lib/docker/containers/f87adbb047e24cb4d6050b4e4c68fd75c7b66563743c59a0f35b4ff209962ae8/f87adbb047e24cb4d6050b4e4c68fd75c7b66563743c59a0f35b4ff209962ae8-json.log",
	        "Name": "/functional-301254",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-301254:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-301254",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9c8d0dee3380b8ed7ab697d50e5b2f43284d921ae68391541ba744d9252530b5-init/diff:/var/lib/docker/overlay2/6f44dcb837d0e69b1b3a1c42f8a8e838d4ec916efe93e3f6d6a8c0411f4e43e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c8d0dee3380b8ed7ab697d50e5b2f43284d921ae68391541ba744d9252530b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c8d0dee3380b8ed7ab697d50e5b2f43284d921ae68391541ba744d9252530b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c8d0dee3380b8ed7ab697d50e5b2f43284d921ae68391541ba744d9252530b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-301254",
	                "Source": "/var/lib/docker/volumes/functional-301254/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-301254",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-301254",
	                "name.minikube.sigs.k8s.io": "functional-301254",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05d7af72f151278e7a6fb7c39d7f3516b4e91d623103cf6ef9f20ac669263ded",
	            "SandboxKey": "/var/run/docker/netns/05d7af72f151",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-301254": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bd109eedba180d875cc9ce5ddc59d47050c25621bcfee95b4a69a2c2c9fcce37",
	                    "EndpointID": "13b5ba264dffb02cc514a99cf2df3084d601914538a6a7cf9f9eedb27a136b20",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-301254",
	                        "f87adbb047e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-301254 -n functional-301254
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 logs -n 25: (1.437762631s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-301254 ssh sudo                                            | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | umount -f /mount-9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-301254                                                  | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| mount          | -p functional-301254                                                  | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| ssh            | functional-301254 ssh findmnt                                         | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | -T /mount1                                                            |                   |         |         |                     |                     |
	| mount          | -p functional-301254                                                  | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| ssh            | functional-301254 ssh findmnt                                         | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | -T /mount1                                                            |                   |         |         |                     |                     |
	| ssh            | functional-301254 ssh findmnt                                         | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | -T /mount2                                                            |                   |         |         |                     |                     |
	| ssh            | functional-301254 ssh findmnt                                         | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | -T /mount3                                                            |                   |         |         |                     |                     |
	| mount          | -p functional-301254                                                  | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | --kill=true                                                           |                   |         |         |                     |                     |
	| tunnel         | functional-301254 tunnel                                              | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| tunnel         | functional-301254 tunnel                                              | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| tunnel         | functional-301254 tunnel                                              | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| addons         | functional-301254 addons list                                         | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	| addons         | functional-301254 addons list                                         | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | -o json                                                               |                   |         |         |                     |                     |
	| service        | functional-301254 service                                             | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | hello-node-connect --url                                              |                   |         |         |                     |                     |
	| update-context | functional-301254                                                     | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-301254                                                     | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-301254                                                     | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| image          | functional-301254                                                     | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | image ls --format short                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-301254                                                     | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | image ls --format yaml                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| ssh            | functional-301254 ssh pgrep                                           | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC |                     |
	|                | buildkitd                                                             |                   |         |         |                     |                     |
	| image          | functional-301254 image build -t                                      | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | localhost/my-image:functional-301254                                  |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                   |         |         |                     |                     |
	| image          | functional-301254 image ls                                            | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	| image          | functional-301254                                                     | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | image ls --format json                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-301254                                                     | functional-301254 | jenkins | v1.34.0 | 28 Oct 24 17:19 UTC | 28 Oct 24 17:19 UTC |
	|                | image ls --format table                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:19:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:19:28.070647  147327 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:19:28.070751  147327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:19:28.070761  147327 out.go:358] Setting ErrFile to fd 2...
	I1028 17:19:28.070766  147327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:19:28.071024  147327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:19:28.071579  147327 out.go:352] Setting JSON to false
	I1028 17:19:28.072545  147327 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3709,"bootTime":1730132259,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:19:28.072660  147327 start.go:139] virtualization: kvm guest
	I1028 17:19:28.074933  147327 out.go:177] * [functional-301254] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:19:28.077120  147327 notify.go:220] Checking for updates...
	I1028 17:19:28.077201  147327 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:19:28.079141  147327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:19:28.080922  147327 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:19:28.083050  147327 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:19:28.084551  147327 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:19:28.086189  147327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:19:28.087754  147327 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:19:28.088421  147327 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:19:28.116120  147327 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:19:28.116236  147327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:19:28.175538  147327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:59 SystemTime:2024-10-28 17:19:28.16580616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:19:28.175637  147327 docker.go:318] overlay module found
	I1028 17:19:28.177343  147327 out.go:177] * Using the docker driver based on existing profile
	I1028 17:19:28.178649  147327 start.go:297] selected driver: docker
	I1028 17:19:28.178689  147327 start.go:901] validating driver "docker" against &{Name:functional-301254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-301254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:19:28.178822  147327 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:19:28.181156  147327 out.go:201] 
	W1028 17:19:28.182419  147327 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 17:19:28.183685  147327 out.go:201] 
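
The start above never reaches the driver: validation rejects the run because the requested 250 MiB is below minikube's usable minimum of 1800 MB (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal retry sketch against the same profile, assuming the standard --memory flag and keeping the profile's driver and runtime:

	out/minikube-linux-amd64 start -p functional-301254 --driver=docker --container-runtime=crio --memory=2048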
	
	
	==> CRI-O <==
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.162756989Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f2e6fad0-32ee-4ca4-89c9-9407f2fecdd2 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.163453199Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.164430279Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,RepoTags:[docker.io/library/nginx:alpine],RepoDigests:[docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250 docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3],Size_:48414943,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f2e6fad0-32ee-4ca4-89c9-9407f2fecdd2 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.165375919Z" level=info msg="Creating container: default/nginx-svc/nginx" id=0a569303-70bf-4d2e-8d1f-c2c5a2c81223 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.165481289Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.270435587Z" level=info msg="Created container 53059ba0ffe68cbf2248f1c100a5463c4197d677a5ca063de0de4ff3a824f869: default/nginx-svc/nginx" id=0a569303-70bf-4d2e-8d1f-c2c5a2c81223 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.271021225Z" level=info msg="Starting container: 53059ba0ffe68cbf2248f1c100a5463c4197d677a5ca063de0de4ff3a824f869" id=fb9db45e-699f-4f5d-b62f-706b8237e8e7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 28 17:19:57 functional-301254 crio[4917]: time="2024-10-28 17:19:57.329262142Z" level=info msg="Started container" PID=8938 containerID=53059ba0ffe68cbf2248f1c100a5463c4197d677a5ca063de0de4ff3a824f869 description=default/nginx-svc/nginx id=fb9db45e-699f-4f5d-b62f-706b8237e8e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4062d6f6dd543a05fe34d18740cab446640dd99b9bf5bd3ae516ecc14442c7dc
	Oct 28 17:19:58 functional-301254 crio[4917]: time="2024-10-28 17:19:58.251188327Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 28 17:20:30 functional-301254 crio[4917]: time="2024-10-28 17:20:30.335177761Z" level=info msg="Checking image status: docker.io/nginx:latest" id=1e7fbd5c-bb7d-459e-aeb9-8566f6e1d943 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:20:30 functional-301254 crio[4917]: time="2024-10-28 17:20:30.335407760Z" level=info msg="Image docker.io/nginx:latest not found" id=1e7fbd5c-bb7d-459e-aeb9-8566f6e1d943 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:20:41 functional-301254 crio[4917]: time="2024-10-28 17:20:41.957987554Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c45371a3-c74a-4879-895f-2ee51237aba0 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:20:41 functional-301254 crio[4917]: time="2024-10-28 17:20:41.958305618Z" level=info msg="Image docker.io/nginx:latest not found" id=c45371a3-c74a-4879-895f-2ee51237aba0 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:20:41 functional-301254 crio[4917]: time="2024-10-28 17:20:41.958763252Z" level=info msg="Pulling image: docker.io/nginx:latest" id=ce37d2fe-df28-41be-b686-5ee81d2110f0 name=/runtime.v1.ImageService/PullImage
	Oct 28 17:20:41 functional-301254 crio[4917]: time="2024-10-28 17:20:41.966217280Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 28 17:21:26 functional-301254 crio[4917]: time="2024-10-28 17:21:26.958742570Z" level=info msg="Checking image status: docker.io/nginx:latest" id=ff9474a1-8a98-4bbf-80dd-be815939f8bb name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:21:26 functional-301254 crio[4917]: time="2024-10-28 17:21:26.959023545Z" level=info msg="Image docker.io/nginx:latest not found" id=ff9474a1-8a98-4bbf-80dd-be815939f8bb name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:21:41 functional-301254 crio[4917]: time="2024-10-28 17:21:41.958636011Z" level=info msg="Checking image status: docker.io/nginx:latest" id=b8248e55-dd5a-4dad-885d-72042f733739 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:21:41 functional-301254 crio[4917]: time="2024-10-28 17:21:41.958901176Z" level=info msg="Image docker.io/nginx:latest not found" id=b8248e55-dd5a-4dad-885d-72042f733739 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:21:41 functional-301254 crio[4917]: time="2024-10-28 17:21:41.959507047Z" level=info msg="Pulling image: docker.io/nginx:latest" id=01fc6244-f56b-44f6-b950-b7203db4bcbf name=/runtime.v1.ImageService/PullImage
	Oct 28 17:21:41 functional-301254 crio[4917]: time="2024-10-28 17:21:41.961013633Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 28 17:22:28 functional-301254 crio[4917]: time="2024-10-28 17:22:28.958268208Z" level=info msg="Checking image status: docker.io/nginx:latest" id=38910eb2-cf4f-4d33-a59a-fbc5f5511e65 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:22:28 functional-301254 crio[4917]: time="2024-10-28 17:22:28.958568544Z" level=info msg="Image docker.io/nginx:latest not found" id=38910eb2-cf4f-4d33-a59a-fbc5f5511e65 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:22:39 functional-301254 crio[4917]: time="2024-10-28 17:22:39.958575595Z" level=info msg="Checking image status: docker.io/nginx:latest" id=568f8713-140e-4009-b827-86d9c34c808e name=/runtime.v1.ImageService/ImageStatus
	Oct 28 17:22:39 functional-301254 crio[4917]: time="2024-10-28 17:22:39.958837759Z" level=info msg="Image docker.io/nginx:latest not found" id=568f8713-140e-4009-b827-86d9c34c808e name=/runtime.v1.ImageService/ImageStatus
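
The CRI-O log above cycles through "Checking image status", "not found", and "Trying to access" for docker.io/nginx:latest without ever recording a completed pull; nginx:alpine, already cached, starts fine at 17:19:57, so this looks like a slow or throttled Docker Hub rather than a runtime fault. One way to take the registry out of a rerun is to pre-load the image from the host; a sketch, assuming the host itself can still pull:

	docker pull docker.io/library/nginx:latest
	out/minikube-linux-amd64 -p functional-301254 image load docker.io/library/nginx:latest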
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	53059ba0ffe68       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                  2 minutes ago       Running             nginx                       0                   4062d6f6dd543       nginx-svc
	a5da959624002       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  2 minutes ago       Running             mysql                       0                   015938f3b7423       mysql-6cdb49bbb-822lv
	df4e4334155e0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   3 minutes ago       Running             dashboard-metrics-scraper   0                   950f24b8fe0bf       dashboard-metrics-scraper-c5db448b4-6hg7s
	e00f923a8faaa       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 3 minutes ago       Running             echoserver                  0                   a69e949c588ba       hello-node-connect-67bdd5bbb4-kd7x9
	c837e3eeb7d64       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         3 minutes ago       Running             kubernetes-dashboard        0                   f0905c50dd2f1       kubernetes-dashboard-695b96c756-cjtd5
	566f36ec79ce1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   85dbb7e409c14       busybox-mount
	0328398a1c3d8       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   148c862fcb41f       hello-node-6b9f76b5c7-bs2m4
	cce79f1466095       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     2                   31b7da83f6979       coredns-7c65d6cfc9-9jblm
	387c70e553be3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 3 minutes ago       Running             kube-proxy                  2                   d1742ce6ce329       kube-proxy-6ckwd
	8146349bd31f5       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                                 3 minutes ago       Running             kindnet-cni                 2                   afba900c07ddd       kindnet-kjg57
	df0fdfeac19a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         3                   f165bd9a36939       storage-provisioner
	c56816940b680       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                 3 minutes ago       Running             kube-apiserver              0                   261fdb1bee525       kube-apiserver-functional-301254
	c9a8dffa51d6b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 3 minutes ago       Running             kube-controller-manager     2                   eef4f4884877e       kube-controller-manager-functional-301254
	b1b04d7a94bfd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 3 minutes ago       Running             kube-scheduler              2                   88e3f953cc249       kube-scheduler-functional-301254
	e8530ca575dc5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago       Running             etcd                        2                   ab485e2532717       etcd-functional-301254
	472985ddd7131       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         2                   f165bd9a36939       storage-provisioner
	b43395eedf9db       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     1                   31b7da83f6979       coredns-7c65d6cfc9-9jblm
	9ae82c6364c9c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago       Exited              etcd                        1                   ab485e2532717       etcd-functional-301254
	d217ab9abbca9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 4 minutes ago       Exited              kube-scheduler              1                   88e3f953cc249       kube-scheduler-functional-301254
	0e4398d1baf53       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                                 4 minutes ago       Exited              kindnet-cni                 1                   afba900c07ddd       kindnet-kjg57
	89323fe8ada25       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 4 minutes ago       Exited              kube-proxy                  1                   d1742ce6ce329       kube-proxy-6ckwd
	263474c2b7eb7       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 4 minutes ago       Exited              kube-controller-manager     1                   eef4f4884877e       kube-controller-manager-functional-301254
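
The Exited rows are expected: busybox-mount is a one-shot pod, and the attempt-1 control-plane containers were superseded on the second restart (their attempt-2 successors are Running above). Their logs stay reachable through the truncated IDs shown; a sketch using crictl inside the node, which accepts ID prefixes, taking the exited storage-provisioner as an example:

	out/minikube-linux-amd64 -p functional-301254 ssh "sudo crictl ps -a --state exited"
	out/minikube-linux-amd64 -p functional-301254 ssh "sudo crictl logs 472985ddd7131"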
	
	
	==> coredns [b43395eedf9db1e950d00b87c814ca9c3c080c6253c0162ddae623f0a38fff6d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45390 - 8651 "HINFO IN 5688139913029301871.6555951845669390486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.084731359s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
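
The connection-refused burst in this instance targets 10.96.0.1:443, the in-cluster kubernetes Service VIP, so it dates from the window when the apiserver was down during the restart; the closing SIGTERM is this pod being replaced by the clean instance below. A quick check that the VIP is backed again, assuming the kubeconfig context carries the profile name:

	kubectl --context functional-301254 get endpoints kubernetes -n default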
	
	
	==> coredns [cce79f1466095725a914d4a10953bc1458bad3fec6941ed0eb720a549ba858ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55871 - 9567 "HINFO IN 2849144469809223147.6327703132125255708. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009240904s
	
	
	==> describe nodes <==
	Name:               functional-301254
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-301254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=functional-301254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_17_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:17:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-301254
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:22:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:20:29 +0000   Mon, 28 Oct 2024 17:17:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:20:29 +0000   Mon, 28 Oct 2024 17:17:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:20:29 +0000   Mon, 28 Oct 2024 17:17:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:20:29 +0000   Mon, 28 Oct 2024 17:17:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-301254
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 509a4c00dd56443db2051865463a0876
	  System UUID:                61ad2ba9-ca1f-4292-8e70-78a8357b65b0
	  Boot ID:                    9ca5ee1d-76d3-40f6-894f-a30303f688cc
	  Kernel Version:             5.15.0-1070-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-bs2m4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     hello-node-connect-67bdd5bbb4-kd7x9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     mysql-6cdb49bbb-822lv                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m8s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-9jblm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m58s
	  kube-system                 etcd-functional-301254                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m3s
	  kube-system                 kindnet-kjg57                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-functional-301254             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-controller-manager-functional-301254    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-6ckwd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-functional-301254             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-6hg7s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-cjtd5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m56s                  kube-proxy       
	  Normal   Starting                 3m45s                  kube-proxy       
	  Normal   Starting                 4m31s                  kube-proxy       
	  Warning  CgroupV1                 5m3s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m3s                   kubelet          Node functional-301254 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s                   kubelet          Node functional-301254 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s                   kubelet          Node functional-301254 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m3s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m58s                  node-controller  Node functional-301254 event: Registered Node functional-301254 in Controller
	  Normal   NodeReady                4m47s                  kubelet          Node functional-301254 status is now: NodeReady
	  Normal   RegisteredNode           4m28s                  node-controller  Node functional-301254 event: Registered Node functional-301254 in Controller
	  Normal   Starting                 3m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m51s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node functional-301254 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node functional-301254 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node functional-301254 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m44s                  node-controller  Node functional-301254 event: Registered Node functional-301254 in Controller
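
As a consistency check, the percentages under Allocated resources are taken against the Allocatable figures above: CPU requests 1450m / 8000m ≈ 18% and limits 800m / 8000m = 10%; memory requests 732Mi against 32859312Ki (≈ 32089Mi) come to ≈ 2.3%, truncated to the 2% shown.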
	
	
	==> dmesg <==
	[  +2.015817] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +4.127681] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[  +8.195365] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[Oct28 17:12] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[ +32.253574] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 60 89 a2 f9 64 7e f2 c6 70 f8 f0 08 00
	[Oct28 17:19] FS-Cache: Duplicate cookie detected
	[  +0.004689] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006754] FS-Cache: O-cookie d=0000000025aafb91{9P.session} n=00000000d59ffe6c
	[  +0.007547] FS-Cache: O-key=[10] '34323935383231363439'
	[  +0.005428] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007945] FS-Cache: N-cookie d=0000000025aafb91{9P.session} n=00000000d360b8ae
	[  +0.008932] FS-Cache: N-key=[10] '34323935383231363439'
	[  +0.006703] FS-Cache: Duplicate cookie detected
	[  +0.006037] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.008180] FS-Cache: O-cookie d=0000000025aafb91{9P.session} n=00000000d59ffe6c
	[  +0.008907] FS-Cache: O-key=[10] '34323935383231363439'
	[  +0.006725] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007948] FS-Cache: N-cookie d=0000000025aafb91{9P.session} n=00000000ec41bce6
	[  +0.008899] FS-Cache: N-key=[10] '34323935383231363439'
	[ +19.824428] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [9ae82c6364c9cd275209af15e29583ece650e231419152732f88126544e7eded] <==
	{"level":"info","ts":"2024-10-28T17:18:11.644258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T17:18:11.644281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-28T17:18:11.644296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T17:18:11.644301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-10-28T17:18:11.644322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-10-28T17:18:11.644329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-10-28T17:18:11.646244Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-301254 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T17:18:11.646248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T17:18:11.646272Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T17:18:11.646518Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T17:18:11.646542Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T17:18:11.647257Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T17:18:11.648478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-28T17:18:11.648198Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T17:18:11.649974Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T17:18:40.034137Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-28T17:18:40.034211Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-301254","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-10-28T17:18:40.034297Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T17:18:40.034407Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T17:18:40.048144Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T17:18:40.048206Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-28T17:18:40.048321Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-10-28T17:18:40.051036Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-28T17:18:40.051131Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-28T17:18:40.051142Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-301254","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e8530ca575dc501a0a5a59ddb1898c9d97fc2dda08002c3a660064d8b6ea06d4] <==
	{"level":"info","ts":"2024-10-28T17:18:54.636637Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T17:18:54.636752Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-28T17:18:54.636929Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-28T17:18:54.636985Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-28T17:18:54.639336Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T17:18:54.639473Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-28T17:18:54.639822Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-28T17:18:54.639971Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T17:18:54.640019Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T17:18:56.163263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-28T17:18:56.163322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-28T17:18:56.163376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-10-28T17:18:56.163398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-10-28T17:18:56.163406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-10-28T17:18:56.163416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-10-28T17:18:56.163436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-10-28T17:18:56.164510Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T17:18:56.164543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T17:18:56.164506Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-301254 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T17:18:56.164786Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T17:18:56.164814Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T17:18:56.165398Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T17:18:56.165484Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T17:18:56.166140Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T17:18:56.166190Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 17:22:44 up  1:05,  0 users,  load average: 0.34, 0.51, 0.73
	Linux functional-301254 5.15.0-1070-gcp #78~20.04.1-Ubuntu SMP Wed Oct 9 22:05:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0e4398d1baf533409f2ba9aebb9021f1f0339578f44dd7a00e0eb883fdce53a4] <==
	I1028 17:18:10.135740       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1028 17:18:10.136167       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1028 17:18:10.136381       1 main.go:148] setting mtu 1500 for CNI 
	I1028 17:18:10.136432       1 main.go:178] kindnetd IP family: "ipv4"
	I1028 17:18:10.136490       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1028 17:18:10.560856       1 controller.go:338] Starting controller kube-network-policies
	I1028 17:18:10.560975       1 controller.go:342] Waiting for informer caches to sync
	I1028 17:18:10.561009       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1028 17:18:12.927996       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1028 17:18:12.928182       1 metrics.go:61] Registering metrics
	I1028 17:18:12.928287       1 controller.go:378] Syncing nftables rules
	I1028 17:18:20.562711       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:18:20.562778       1 main.go:300] handling current node
	I1028 17:18:30.560997       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:18:30.561066       1 main.go:300] handling current node
	
	
	==> kindnet [8146349bd31f5ca12441ce2ee2e56984467076f59077e5e2c15983b11ec4f346] <==
	I1028 17:20:39.035930       1 main.go:300] handling current node
	I1028 17:20:49.035414       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:20:49.035454       1 main.go:300] handling current node
	I1028 17:20:59.029702       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:20:59.029753       1 main.go:300] handling current node
	I1028 17:21:09.038174       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:21:09.038214       1 main.go:300] handling current node
	I1028 17:21:19.038323       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:21:19.038359       1 main.go:300] handling current node
	I1028 17:21:29.028826       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:21:29.028884       1 main.go:300] handling current node
	I1028 17:21:39.035723       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:21:39.035772       1 main.go:300] handling current node
	I1028 17:21:49.028932       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:21:49.028972       1 main.go:300] handling current node
	I1028 17:21:59.029432       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:21:59.029467       1 main.go:300] handling current node
	I1028 17:22:09.037642       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:22:09.037699       1 main.go:300] handling current node
	I1028 17:22:19.030605       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:22:19.030640       1 main.go:300] handling current node
	I1028 17:22:29.029396       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:22:29.029434       1 main.go:300] handling current node
	I1028 17:22:39.037900       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 17:22:39.037938       1 main.go:300] handling current node
	
	
	==> kube-apiserver [c56816940b6808c82da4177ababc8a340ed3d8e0ccb47f8514eb8582398e3a6c] <==
	I1028 17:18:57.244330       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E1028 17:18:57.253259       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1028 17:18:57.328007       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 17:18:57.337391       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 17:18:58.089811       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 17:18:58.911179       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 17:18:59.012060       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 17:18:59.023215       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 17:18:59.087837       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 17:18:59.096193       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 17:19:00.751829       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 17:19:00.849890       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 17:19:19.944122       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.60.253"}
	I1028 17:19:24.214795       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1028 17:19:24.344466       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.227.50"}
	I1028 17:19:30.724681       1 controller.go:615] quota admission added evaluator for: namespaces
	I1028 17:19:30.887409       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.106.215"}
	I1028 17:19:30.953472       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.231.16"}
	I1028 17:19:36.318912       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.77.23"}
	I1028 17:19:38.963415       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.38.29"}
	I1028 17:19:41.445621       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.130.73"}
	E1028 17:19:58.530616       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52164: use of closed network connection
	E1028 17:19:59.365273       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52170: use of closed network connection
	E1028 17:20:01.130734       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53038: use of closed network connection
	E1028 17:20:02.590955       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53054: use of closed network connection
	
	
	==> kube-controller-manager [263474c2b7eb753c01205dd3acf583eaa65ca47d017c3c38cfdff8266fa96b66] <==
	I1028 17:18:16.241054       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1028 17:18:16.250439       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 17:18:16.267434       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 17:18:16.270777       1 shared_informer.go:320] Caches are synced for ephemeral
	I1028 17:18:16.288580       1 shared_informer.go:320] Caches are synced for HPA
	I1028 17:18:16.288623       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1028 17:18:16.288652       1 shared_informer.go:320] Caches are synced for deployment
	I1028 17:18:16.289190       1 shared_informer.go:320] Caches are synced for GC
	I1028 17:18:16.289963       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1028 17:18:16.339436       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 17:18:16.360675       1 shared_informer.go:320] Caches are synced for disruption
	I1028 17:18:16.362876       1 shared_informer.go:320] Caches are synced for taint
	I1028 17:18:16.362984       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 17:18:16.363075       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-301254"
	I1028 17:18:16.363149       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 17:18:16.384929       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 17:18:16.393746       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 17:18:16.439186       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 17:18:16.497398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="208.70896ms"
	I1028 17:18:16.497595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.481µs"
	I1028 17:18:16.805821       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 17:18:16.857104       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 17:18:16.857133       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 17:18:18.204508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.02187ms"
	I1028 17:18:18.204605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.619µs"
	
	
	==> kube-controller-manager [c9a8dffa51d6bff2b6dd3e2bd44bbe53d9e3339380db87ace7b33707879935f3] <==
	I1028 17:19:30.869348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.817988ms"
	I1028 17:19:30.876401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.905184ms"
	I1028 17:19:30.876495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="51.558µs"
	I1028 17:19:30.882794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="54.815µs"
	I1028 17:19:30.931992       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="54.542898ms"
	I1028 17:19:30.954559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="22.516029ms"
	I1028 17:19:30.967855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.090499ms"
	I1028 17:19:30.967985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="79.158µs"
	I1028 17:19:36.392280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="18.492478ms"
	I1028 17:19:36.429635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="37.302331ms"
	I1028 17:19:36.430016       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="72.999µs"
	I1028 17:19:36.438420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="69.893µs"
	I1028 17:19:39.190521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.210447ms"
	I1028 17:19:39.190616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="53.956µs"
	I1028 17:19:41.375918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="12.676324ms"
	I1028 17:19:41.388487       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="12.513634ms"
	I1028 17:19:41.388590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="58.427µs"
	I1028 17:19:43.204097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.080839ms"
	I1028 17:19:43.204912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="75.741µs"
	I1028 17:19:43.213424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="8.174738ms"
	I1028 17:19:43.213519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="48.028µs"
	I1028 17:19:53.268220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="9.013301ms"
	I1028 17:19:53.268333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="64.394µs"
	I1028 17:19:59.103026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-301254"
	I1028 17:20:29.986919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-301254"
	
	
	==> kube-proxy [387c70e553be31aa76b40a7721f562f9a4151792cd8be35547f2bdc1633c2aac] <==
	I1028 17:18:58.532756       1 server_linux.go:66] "Using iptables proxy"
	I1028 17:18:58.660797       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 17:18:58.660868       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:18:58.680453       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 17:18:58.680514       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:18:58.682419       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:18:58.682772       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:18:58.682799       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:18:58.683964       1 config.go:199] "Starting service config controller"
	I1028 17:18:58.683996       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:18:58.683998       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:18:58.684020       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:18:58.684050       1 config.go:328] "Starting node config controller"
	I1028 17:18:58.684117       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:18:58.784833       1 shared_informer.go:320] Caches are synced for node config
	I1028 17:18:58.784970       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:18:58.784998       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [89323fe8ada255399a0afe0cf491a95631c2f4cf4195320c3585571d23348e6f] <==
	I1028 17:18:10.138930       1 server_linux.go:66] "Using iptables proxy"
	E1028 17:18:10.358222       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-301254\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I1028 17:18:12.835817       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 17:18:12.837662       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:18:13.047439       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 17:18:13.047490       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:18:13.049688       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:18:13.050135       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:18:13.050173       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:18:13.051306       1 config.go:199] "Starting service config controller"
	I1028 17:18:13.051392       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:18:13.051341       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:18:13.051474       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:18:13.051697       1 config.go:328] "Starting node config controller"
	I1028 17:18:13.051712       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:18:13.152478       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:18:13.152543       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:18:13.154115       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b1b04d7a94bfdee746921a54155313824db4796b2a8e731951215827cafcc698] <==
	I1028 17:18:55.271044       1 serving.go:386] Generated self-signed cert in-memory
	I1028 17:18:57.330218       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 17:18:57.330260       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:18:57.335345       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I1028 17:18:57.335396       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1028 17:18:57.335418       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 17:18:57.335452       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:18:57.335471       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1028 17:18:57.335506       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1028 17:18:57.335735       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 17:18:57.335756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 17:18:57.436258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:18:57.436310       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1028 17:18:57.436323       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kube-scheduler [d217ab9abbca972ea04ec304f27921429e380b2b9adceae5a9b9d075c244a5bf] <==
	I1028 17:18:11.388288       1 serving.go:386] Generated self-signed cert in-memory
	W1028 17:18:12.746875       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 17:18:12.746918       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 17:18:12.746931       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 17:18:12.746961       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 17:18:12.846841       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 17:18:12.846953       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:18:12.850043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 17:18:12.850158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 17:18:12.850187       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:18:12.850204       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 17:18:12.950552       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:18:40.035001       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1028 17:18:40.035141       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1028 17:18:40.035375       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1028 17:18:40.035454       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 28 17:21:24 functional-301254 kubelet[5279]: E1028 17:21:24.076697    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136084076492727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:21:24 functional-301254 kubelet[5279]: E1028 17:21:24.076740    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136084076492727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:21:26 functional-301254 kubelet[5279]: E1028 17:21:26.959311    5279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="54095398-6c59-4521-8b12-a906fe652ec4"
	Oct 28 17:21:34 functional-301254 kubelet[5279]: E1028 17:21:34.078145    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136094077961071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:21:34 functional-301254 kubelet[5279]: E1028 17:21:34.078181    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136094077961071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:21:44 functional-301254 kubelet[5279]: E1028 17:21:44.079511    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136104079335608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:21:44 functional-301254 kubelet[5279]: E1028 17:21:44.079551    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136104079335608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:21:54 functional-301254 kubelet[5279]: E1028 17:21:54.080959    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136114080776917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:21:54 functional-301254 kubelet[5279]: E1028 17:21:54.081003    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136114080776917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:04 functional-301254 kubelet[5279]: E1028 17:22:04.082579    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136124082387738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:04 functional-301254 kubelet[5279]: E1028 17:22:04.082630    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136124082387738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:14 functional-301254 kubelet[5279]: E1028 17:22:14.084131    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136134083884012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:14 functional-301254 kubelet[5279]: E1028 17:22:14.084173    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136134083884012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:16 functional-301254 kubelet[5279]: E1028 17:22:16.114256    5279 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 28 17:22:16 functional-301254 kubelet[5279]: E1028 17:22:16.114323    5279 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 28 17:22:16 functional-301254 kubelet[5279]: E1028 17:22:16.114441    5279 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljlb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod sp-pod_default(54095398-6c59-4521-8b12-a906fe652ec4): ErrImagePull: loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 28 17:22:16 functional-301254 kubelet[5279]: E1028 17:22:16.115612    5279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54095398-6c59-4521-8b12-a906fe652ec4"
	Oct 28 17:22:24 functional-301254 kubelet[5279]: E1028 17:22:24.085768    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136144085562627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:24 functional-301254 kubelet[5279]: E1028 17:22:24.085815    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136144085562627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:28 functional-301254 kubelet[5279]: E1028 17:22:28.958890    5279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="54095398-6c59-4521-8b12-a906fe652ec4"
	Oct 28 17:22:34 functional-301254 kubelet[5279]: E1028 17:22:34.087936    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136154087745175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:34 functional-301254 kubelet[5279]: E1028 17:22:34.087974    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136154087745175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:39 functional-301254 kubelet[5279]: E1028 17:22:39.959060    5279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="54095398-6c59-4521-8b12-a906fe652ec4"
	Oct 28 17:22:44 functional-301254 kubelet[5279]: E1028 17:22:44.089581    5279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136164089396363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:22:44 functional-301254 kubelet[5279]: E1028 17:22:44.089623    5279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136164089396363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:284619,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [c837e3eeb7d64c073f0ddbe34b35b4decc84a449e3c6188edce50a1b5c6b7187] <==
	2024/10/28 17:19:38 Using namespace: kubernetes-dashboard
	2024/10/28 17:19:38 Using in-cluster config to connect to apiserver
	2024/10/28 17:19:38 Using secret token for csrf signing
	2024/10/28 17:19:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/28 17:19:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/28 17:19:38 Successful initial request to the apiserver, version: v1.31.2
	2024/10/28 17:19:38 Generating JWE encryption key
	2024/10/28 17:19:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/28 17:19:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/28 17:19:38 Initializing JWE encryption key from synchronized object
	2024/10/28 17:19:38 Creating in-cluster Sidecar client
	2024/10/28 17:19:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 17:19:38 Serving insecurely on HTTP port: 9090
	2024/10/28 17:20:08 Successful request to sidecar
	2024/10/28 17:19:38 Starting overwatch
	
	
	==> storage-provisioner [472985ddd7131d3cb1f058b21ca4523989fab0d3d6d502f1671faee7348789f3] <==
	I1028 17:18:25.610558       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 17:18:25.617766       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 17:18:25.617802       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [df0fdfeac19a7e8578e95fc9e3d479a9b2960f11aa580f06ac4b4cdb4887930f] <==
	I1028 17:18:58.444916       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 17:18:58.453310       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 17:18:58.453431       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 17:19:15.848894       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 17:19:15.849092       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-301254_74e85247-db6a-491b-9d51-d6a174d6fc9a!
	I1028 17:19:15.849082       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"954f6af9-969d-4669-a56e-4ff54451e05f", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-301254_74e85247-db6a-491b-9d51-d6a174d6fc9a became leader
	I1028 17:19:15.949331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-301254_74e85247-db6a-491b-9d51-d6a174d6fc9a!
	I1028 17:19:42.528123       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1028 17:19:42.528449       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3", APIVersion:"v1", ResourceVersion:"819", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1028 17:19:42.528203       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    b2ba6e0a-09d7-4a7a-b989-a1acbfab8573 345 0 2024-10-28 17:17:47 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-28 17:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3 819 0 2024-10-28 17:19:42 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-28 17:19:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-28 17:19:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1028 17:19:42.528734       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3" provisioned
	I1028 17:19:42.528758       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1028 17:19:42.528764       1 volume_store.go:212] Trying to save persistentvolume "pvc-5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3"
	I1028 17:19:42.535975       1 volume_store.go:219] persistentvolume "pvc-5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3" saved
	I1028 17:19:42.536282       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3", APIVersion:"v1", ResourceVersion:"819", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-301254 -n functional-301254
helpers_test.go:261: (dbg) Run:  kubectl --context functional-301254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-301254 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-301254 describe pod busybox-mount sp-pod:
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-301254/192.168.49.2
	Start Time:       Mon, 28 Oct 2024 17:19:27 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://566f36ec79ce189017de956ccbdd5da3fdb8788079dd501223d31bbfc0f647c9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 28 Oct 2024 17:19:31 +0000
	      Finished:     Mon, 28 Oct 2024 17:19:31 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rtvhb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rtvhb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m17s  default-scheduler  Successfully assigned default/busybox-mount to functional-301254
	  Normal  Pulling    3m17s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m14s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.123s (3.123s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m14s  kubelet            Created container mount-munger
	  Normal  Started    3m14s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-301254/192.168.49.2
	Start Time:       Mon, 28 Oct 2024 17:19:42 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ljlb4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-ljlb4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-301254
	  Warning  Failed     2m16s                kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    64s (x3 over 3m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     29s (x3 over 2m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     29s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    6s (x4 over 2m15s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     6s (x4 over 2m15s)   kubelet            Error: ImagePullBackOff
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.00s)
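Note on the failure above: the claim itself provisioned cleanly (the storage-provisioner log shows persistentvolume pvc-5fce3f6f-cfba-4bd8-93d8-5ee3a8dbe4c3 saved at 17:19:42), so the test actually died one step later, when sp-pod never left ImagePullBackOff because every pull of docker.io/nginx hit Docker Hub's anonymous rate limit (toomanyrequests). A minimal sketch of a local workaround, assuming a Docker-capable host and the profile name from this run, is to pull the image once on the host and side-load it into the node so the pull never reaches the registry:

	# Pull once on the host; authenticated pulls (docker login) get a higher rate limit.
	docker pull docker.io/nginx:latest
	# Copy the image into the profile's container runtime (cri-o in this job).
	out/minikube-linux-amd64 -p functional-301254 image load docker.io/nginx:latest

Note that the pod spec above uses ImagePullPolicy:Always, so side-loading alone may not be enough; pinning the testdata manifest to a pre-loaded tag with imagePullPolicy: IfNotPresent is the other obvious lever.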

TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 image ls --format short --alsologtostderr: (2.246074815s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-301254 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-301254 image ls --format short --alsologtostderr:
I1028 17:19:50.250168  152572 out.go:345] Setting OutFile to fd 1 ...
I1028 17:19:50.250458  152572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:50.250469  152572 out.go:358] Setting ErrFile to fd 2...
I1028 17:19:50.250473  152572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:50.250644  152572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
I1028 17:19:50.251296  152572 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:50.251399  152572 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:50.251771  152572 cli_runner.go:164] Run: docker container inspect functional-301254 --format={{.State.Status}}
I1028 17:19:50.271162  152572 ssh_runner.go:195] Run: systemctl --version
I1028 17:19:50.271209  152572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-301254
I1028 17:19:50.289707  152572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/functional-301254/id_rsa Username:docker}
I1028 17:19:50.380919  152572 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:19:52.431252  152572 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.050276541s)
W1028 17:19:52.431348  152572 cache_images.go:734] Failed to list images for profile functional-301254 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1028 17:19:52.428132    8435 remote_image.go:136] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},},}"
time="2024-10-28T17:19:52Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)
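The empty stdout above is a symptom rather than the cause: sudo crictl images --output json completed in 2.05s, just past crictl's default 2s RPC timeout, so the call failed with DeadlineExceeded and minikube fell back to an empty image list, which is why registry.k8s.io/pause appears to be missing. A quick manual check, sketched here assuming the same profile, is to rerun the listing on the node with a longer timeout:

	out/minikube-linux-amd64 -p functional-301254 ssh -- sudo crictl --timeout=10s images --output json

If that returns the full list, the failure was a transiently slow cri-o image service rather than a genuinely missing image.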

Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 18.94
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 13.36
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.23
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.21
21 TestBinaryMirror 1.28
22 TestOffline 49.01
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 192.43
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 10.45
35 TestAddons/parallel/Registry 16.79
37 TestAddons/parallel/InspektorGadget 11.62
40 TestAddons/parallel/CSI 63.79
41 TestAddons/parallel/Headlamp 17.58
42 TestAddons/parallel/CloudSpanner 7.11
43 TestAddons/parallel/LocalPath 16.24
44 TestAddons/parallel/NvidiaDevicePlugin 6.45
45 TestAddons/parallel/Yakd 10.81
46 TestAddons/parallel/AmdGpuDevicePlugin 6.46
47 TestAddons/StoppedEnableDisable 12.08
48 TestCertOptions 41.77
49 TestCertExpiration 221.04
51 TestForceSystemdFlag 29.08
52 TestForceSystemdEnv 28.75
54 TestKVMDriverInstallOrUpdate 3.44
58 TestErrorSpam/setup 20.32
59 TestErrorSpam/start 0.63
60 TestErrorSpam/status 0.88
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.66
63 TestErrorSpam/stop 1.36
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.1
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.32
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.83
75 TestFunctional/serial/CacheCmd/cache/add_local 2.41
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 38.11
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.36
86 TestFunctional/serial/LogsFileCmd 1.38
87 TestFunctional/serial/InvalidService 4.36
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 11.47
91 TestFunctional/parallel/DryRun 0.42
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.35
97 TestFunctional/parallel/ServiceCmdConnect 7.78
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.63
102 TestFunctional/parallel/CpCmd 2.22
103 TestFunctional/parallel/MySQL 26.44
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 1.49
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
113 TestFunctional/parallel/License 1.17
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.22
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
116 TestFunctional/parallel/ProfileCmd/profile_list 0.72
117 TestFunctional/parallel/MountCmd/any-port 8.9
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.77
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.65
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
125 TestFunctional/parallel/ImageCommands/ImageBuild 5.22
126 TestFunctional/parallel/ImageCommands/Setup 1.74
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
134 TestFunctional/parallel/ServiceCmd/List 0.48
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.2
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
139 TestFunctional/parallel/MountCmd/specific-port 1.83
140 TestFunctional/parallel/ServiceCmd/Format 0.45
141 TestFunctional/parallel/ServiceCmd/URL 0.4
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.8
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 26.19
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 101.32
162 TestMultiControlPlane/serial/DeployApp 6.43
163 TestMultiControlPlane/serial/PingHostFromPods 1.07
164 TestMultiControlPlane/serial/AddWorkerNode 30.35
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
167 TestMultiControlPlane/serial/CopyFile 15.51
168 TestMultiControlPlane/serial/StopSecondaryNode 12.52
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 22.82
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 164.35
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.21
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
175 TestMultiControlPlane/serial/StopCluster 35.52
176 TestMultiControlPlane/serial/RestartCluster 110.84
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 35.79
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
183 TestJSONOutput/start/Command 38.66
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.65
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.59
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.69
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
208 TestKicCustomNetwork/create_custom_network 36.32
209 TestKicCustomNetwork/use_default_bridge_network 24.69
210 TestKicExistingNetwork 25.43
211 TestKicCustomSubnet 26.69
212 TestKicStaticIP 26.05
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 48.85
217 TestMountStart/serial/StartWithMountFirst 6.13
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 8.82
220 TestMountStart/serial/VerifyMountSecond 0.25
221 TestMountStart/serial/DeleteFirst 1.6
222 TestMountStart/serial/VerifyMountPostDelete 0.24
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.84
225 TestMountStart/serial/VerifyMountPostStop 0.24
228 TestMultiNode/serial/FreshStart2Nodes 67.72
229 TestMultiNode/serial/DeployApp2Nodes 5.72
230 TestMultiNode/serial/PingHostFrom2Pods 0.73
231 TestMultiNode/serial/AddNode 28.55
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.61
234 TestMultiNode/serial/CopyFile 8.85
235 TestMultiNode/serial/StopNode 2.12
236 TestMultiNode/serial/StartAfterStop 9.01
237 TestMultiNode/serial/RestartKeepsNodes 109.55
238 TestMultiNode/serial/DeleteNode 5.08
239 TestMultiNode/serial/StopMultiNode 23.74
240 TestMultiNode/serial/RestartMultiNode 49.81
241 TestMultiNode/serial/ValidateNameConflict 23.47
246 TestPreload 116.52
248 TestScheduledStopUnix 96.88
251 TestInsufficientStorage 9.71
252 TestRunningBinaryUpgrade 140.04
254 TestKubernetesUpgrade 333.46
255 TestMissingContainerUpgrade 94.1
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestStoppedBinaryUpgrade/Setup 2.81
266 TestNoKubernetes/serial/StartWithK8s 26.27
267 TestStoppedBinaryUpgrade/Upgrade 127.55
268 TestNoKubernetes/serial/StartWithStopK8s 9.45
269 TestNoKubernetes/serial/Start 11.44
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
271 TestNoKubernetes/serial/ProfileList 2.66
272 TestNoKubernetes/serial/Stop 1.6
273 TestNoKubernetes/serial/StartNoArgs 8.17
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
282 TestNetworkPlugins/group/false 4.06
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
288 TestPause/serial/Start 44.97
289 TestPause/serial/SecondStartNoReconfiguration 27.75
290 TestPause/serial/Pause 1.02
291 TestPause/serial/VerifyStatus 0.3
292 TestPause/serial/Unpause 0.79
293 TestPause/serial/PauseAgain 0.79
294 TestPause/serial/DeletePaused 2.67
295 TestPause/serial/VerifyDeletedResources 22.1
297 TestStartStop/group/old-k8s-version/serial/FirstStart 135.91
299 TestStartStop/group/embed-certs/serial/FirstStart 48.51
300 TestStartStop/group/embed-certs/serial/DeployApp 10.27
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
302 TestStartStop/group/embed-certs/serial/Stop 12.29
304 TestStartStop/group/no-preload/serial/FirstStart 56.75
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/embed-certs/serial/SecondStart 264.17
307 TestStartStop/group/no-preload/serial/DeployApp 10.26
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
309 TestStartStop/group/old-k8s-version/serial/DeployApp 10.41
310 TestStartStop/group/no-preload/serial/Stop 11.86
311 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.82
312 TestStartStop/group/old-k8s-version/serial/Stop 11.95
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/no-preload/serial/SecondStart 262.23
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
316 TestStartStop/group/old-k8s-version/serial/SecondStart 147.45
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.05
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.86
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.56
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
327 TestStartStop/group/old-k8s-version/serial/Pause 2.88
329 TestStartStop/group/newest-cni/serial/FirstStart 31.59
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/embed-certs/serial/Pause 2.85
334 TestNetworkPlugins/group/auto/Start 38.37
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
337 TestStartStop/group/newest-cni/serial/Stop 3.08
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
339 TestStartStop/group/newest-cni/serial/SecondStart 13.69
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
343 TestStartStop/group/newest-cni/serial/Pause 2.86
344 TestNetworkPlugins/group/kindnet/Start 39.52
345 TestNetworkPlugins/group/auto/KubeletFlags 0.26
346 TestNetworkPlugins/group/auto/NetCatPod 10.2
347 TestNetworkPlugins/group/auto/DNS 0.13
348 TestNetworkPlugins/group/auto/Localhost 0.12
349 TestNetworkPlugins/group/auto/HairPin 0.11
350 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/Start 61
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
355 TestStartStop/group/no-preload/serial/Pause 3.07
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
357 TestNetworkPlugins/group/kindnet/NetCatPod 11.52
358 TestNetworkPlugins/group/custom-flannel/Start 50.72
359 TestNetworkPlugins/group/kindnet/DNS 0.13
360 TestNetworkPlugins/group/kindnet/Localhost 0.14
361 TestNetworkPlugins/group/kindnet/HairPin 0.11
362 TestNetworkPlugins/group/enable-default-cni/Start 37.3
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.25
367 TestNetworkPlugins/group/calico/NetCatPod 10.18
368 TestNetworkPlugins/group/custom-flannel/DNS 0.13
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
373 TestNetworkPlugins/group/calico/DNS 0.15
374 TestNetworkPlugins/group/calico/Localhost 0.12
375 TestNetworkPlugins/group/calico/HairPin 0.11
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
379 TestNetworkPlugins/group/flannel/Start 51.15
380 TestNetworkPlugins/group/bridge/Start 42.23
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
383 TestNetworkPlugins/group/bridge/NetCatPod 11.18
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
385 TestNetworkPlugins/group/flannel/NetCatPod 11.17
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestNetworkPlugins/group/bridge/DNS 0.13
388 TestNetworkPlugins/group/bridge/Localhost 0.1
389 TestNetworkPlugins/group/bridge/HairPin 0.11
390 TestNetworkPlugins/group/flannel/DNS 0.13
391 TestNetworkPlugins/group/flannel/Localhost 0.11
392 TestNetworkPlugins/group/flannel/HairPin 0.1
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
TestDownloadOnly/v1.20.0/json-events (18.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-832962 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-832962 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (18.942642282s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.94s)
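
The json-events subtest exercises minikube's machine-readable output: with -o=json, every progress step is written to stdout as a CloudEvents-style JSON line. A minimal sketch for inspecting those events by hand (the profile name is illustrative, and the "io.k8s.sigs.minikube.step" type string matches recent minikube releases but should be confirmed against your build):

    # Re-run the download in JSON mode and print only the step messages
    out/minikube-linux-amd64 start -o=json --download-only -p json-demo \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'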

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 17:07:07.541913  108914 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1028 17:07:07.542054  108914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
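
preload-exists asserts only that the tarball cached by the previous subtest is present on disk; nothing new is downloaded, hence the 0.00s duration. The equivalent manual check, using the exact path from this run's log:

    # Preload tarball cached under MINIKUBE_HOME by the json-events subtest
    ls -lh /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4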

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-832962
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-832962: exit status 85 (66.329166ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-832962 | jenkins | v1.34.0 | 28 Oct 24 17:06 UTC |          |
	|         | -p download-only-832962        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:06:48
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:06:48.642430  108926 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:06:48.642574  108926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:06:48.642588  108926 out.go:358] Setting ErrFile to fd 2...
	I1028 17:06:48.642595  108926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:06:48.642786  108926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	W1028 17:06:48.642916  108926 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19872-102136/.minikube/config/config.json: open /home/jenkins/minikube-integration/19872-102136/.minikube/config/config.json: no such file or directory
	I1028 17:06:48.643497  108926 out.go:352] Setting JSON to true
	I1028 17:06:48.644533  108926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2950,"bootTime":1730132259,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:06:48.644638  108926 start.go:139] virtualization: kvm guest
	I1028 17:06:48.647142  108926 out.go:97] [download-only-832962] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:06:48.647306  108926 notify.go:220] Checking for updates...
	W1028 17:06:48.647303  108926 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 17:06:48.649167  108926 out.go:169] MINIKUBE_LOCATION=19872
	I1028 17:06:48.650977  108926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:06:48.652602  108926 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:06:48.654346  108926 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:06:48.655988  108926 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 17:06:48.658788  108926 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 17:06:48.659057  108926 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:06:48.681850  108926 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:06:48.681929  108926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:06:48.734236  108926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-28 17:06:48.723065302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:06:48.734388  108926 docker.go:318] overlay module found
	I1028 17:06:48.736223  108926 out.go:97] Using the docker driver based on user configuration
	I1028 17:06:48.736256  108926 start.go:297] selected driver: docker
	I1028 17:06:48.736265  108926 start.go:901] validating driver "docker" against <nil>
	I1028 17:06:48.736365  108926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:06:48.788688  108926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-28 17:06:48.778711828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:06:48.788838  108926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:06:48.789872  108926 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1028 17:06:48.790122  108926 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 17:06:48.792379  108926 out.go:169] Using Docker driver with root privileges
	I1028 17:06:48.794034  108926 cni.go:84] Creating CNI manager for ""
	I1028 17:06:48.794084  108926 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:06:48.794096  108926 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 17:06:48.794174  108926 start.go:340] cluster config:
	{Name:download-only-832962 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-832962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:06:48.795620  108926 out.go:97] Starting "download-only-832962" primary control-plane node in "download-only-832962" cluster
	I1028 17:06:48.795641  108926 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 17:06:48.797000  108926 out.go:97] Pulling base image v0.0.45-1730110049-19872 ...
	I1028 17:06:48.797029  108926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 17:06:48.797155  108926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon
	I1028 17:06:48.813746  108926 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 to local cache
	I1028 17:06:48.813922  108926 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local cache directory
	I1028 17:06:48.814024  108926 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 to local cache
	I1028 17:06:49.180126  108926 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 17:06:49.180168  108926 cache.go:56] Caching tarball of preloaded images
	I1028 17:06:49.180343  108926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 17:06:49.182355  108926 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 17:06:49.182385  108926 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1028 17:06:49.284575  108926 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-832962 host does not exist
	  To start a cluster, run: "minikube start -p download-only-832962"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
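
The non-zero exit is the expected outcome here: a --download-only profile never creates a host, so "minikube logs" has nothing to collect and exits with status 85 (note the "host does not exist" hint in the stdout above). Judging by the subtest name, the assertion is on how long the command takes, not on its exit code. A quick manual reproduction, valid only while the profile still exists:

    out/minikube-linux-amd64 logs -p download-only-832962
    echo $?   # 85: no control-plane host to read logs from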

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-832962
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (13.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-328985 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-328985 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.362844053s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (13.36s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 17:07:21.319618  108914 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1028 17:07:21.319674  108914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-328985
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-328985: exit status 85 (69.762092ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-832962 | jenkins | v1.34.0 | 28 Oct 24 17:06 UTC |                     |
	|         | -p download-only-832962        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| delete  | -p download-only-832962        | download-only-832962 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| start   | -o=json --download-only        | download-only-328985 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | -p download-only-328985        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:07:08
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:07:08.000144  109312 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:07:08.000265  109312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:08.000275  109312 out.go:358] Setting ErrFile to fd 2...
	I1028 17:07:08.000280  109312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:08.000511  109312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:07:08.001121  109312 out.go:352] Setting JSON to true
	I1028 17:07:08.002125  109312 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2969,"bootTime":1730132259,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:07:08.002246  109312 start.go:139] virtualization: kvm guest
	I1028 17:07:08.004555  109312 out.go:97] [download-only-328985] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:07:08.004773  109312 notify.go:220] Checking for updates...
	I1028 17:07:08.006421  109312 out.go:169] MINIKUBE_LOCATION=19872
	I1028 17:07:08.008220  109312 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:07:08.010107  109312 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:07:08.011602  109312 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:07:08.012979  109312 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 17:07:08.015821  109312 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 17:07:08.016132  109312 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:07:08.038902  109312 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:07:08.038971  109312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:07:08.086575  109312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-28 17:07:08.076571551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:07:08.086683  109312 docker.go:318] overlay module found
	I1028 17:07:08.088552  109312 out.go:97] Using the docker driver based on user configuration
	I1028 17:07:08.088580  109312 start.go:297] selected driver: docker
	I1028 17:07:08.088586  109312 start.go:901] validating driver "docker" against <nil>
	I1028 17:07:08.088679  109312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:07:08.137845  109312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-28 17:07:08.128783571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:07:08.138072  109312 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:07:08.138826  109312 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1028 17:07:08.139096  109312 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 17:07:08.140978  109312 out.go:169] Using Docker driver with root privileges
	I1028 17:07:08.142387  109312 cni.go:84] Creating CNI manager for ""
	I1028 17:07:08.142472  109312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 17:07:08.142491  109312 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 17:07:08.142572  109312 start.go:340] cluster config:
	{Name:download-only-328985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-328985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:08.143909  109312 out.go:97] Starting "download-only-328985" primary control-plane node in "download-only-328985" cluster
	I1028 17:07:08.143935  109312 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 17:07:08.145098  109312 out.go:97] Pulling base image v0.0.45-1730110049-19872 ...
	I1028 17:07:08.145131  109312 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:08.145232  109312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon
	I1028 17:07:08.162660  109312 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 to local cache
	I1028 17:07:08.162821  109312 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local cache directory
	I1028 17:07:08.162841  109312 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local cache directory, skipping pull
	I1028 17:07:08.162846  109312 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 exists in cache, skipping pull
	I1028 17:07:08.162856  109312 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 as a tarball
	I1028 17:07:08.537946  109312 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:07:08.537984  109312 cache.go:56] Caching tarball of preloaded images
	I1028 17:07:08.538150  109312 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:08.540044  109312 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 17:07:08.540073  109312 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 17:07:08.642806  109312 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19872-102136/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-328985 host does not exist
	  To start a cluster, run: "minikube start -p download-only-328985"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-328985
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (1.21s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-179742 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-179742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-179742
--- PASS: TestDownloadOnlyKic (1.21s)

                                                
                                    
TestBinaryMirror (1.28s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 17:07:23.240525  108914 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-988801 --alsologtostderr --binary-mirror http://127.0.0.1:35689 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-988801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-988801
--- PASS: TestBinaryMirror (1.28s)
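
TestBinaryMirror points minikube at a local HTTP server instead of dl.k8s.io for the kubectl/kubeadm/kubelet downloads. A rough sketch of the same setup (the mirror directory name and layout are assumptions; compare the paths minikube requests against the dl.k8s.io URL in the log above before populating it):

    # Serve a pre-populated mirror directory, then start with --binary-mirror
    python3 -m http.server 35689 --directory ./k8s-mirror &
    out/minikube-linux-amd64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:35689 --driver=docker --container-runtime=crio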

                                                
                                    
TestOffline (49.01s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-588908 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-588908 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (46.790880494s)
helpers_test.go:175: Cleaning up "offline-crio-588908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-588908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-588908: (2.214067406s)
--- PASS: TestOffline (49.01s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-803184
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-803184: exit status 85 (220.89285ms)

                                                
                                                
-- stdout --
	* Profile "addons-803184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-803184"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-803184
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-803184: exit status 85 (220.868802ms)

                                                
                                                
-- stdout --
	* Profile "addons-803184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-803184"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

                                                
                                    
TestAddons/Setup (192.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-803184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-803184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m12.42909876s)
--- PASS: TestAddons/Setup (192.43s)
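
All of the addons above are enabled up front through repeated --addons flags on the initial start. The same addons can be toggled on the running profile, which is how the parallel subtests below clean up after themselves:

    # Enable, inspect, and disable addons on the running cluster
    out/minikube-linux-amd64 -p addons-803184 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-803184 addons list
    out/minikube-linux-amd64 -p addons-803184 addons disable metrics-server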

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-803184 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-803184 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.45s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-803184 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-803184 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [033a9669-560e-47ca-b638-fcd1736b809f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [033a9669-560e-47ca-b638-fcd1736b809f] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004030135s
addons_test.go:633: (dbg) Run:  kubectl --context addons-803184 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-803184 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-803184 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.45s)
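
The FakeCredentials flow verifies that the gcp-auth webhook injects credentials into freshly created pods; the two printenv calls above are the whole check. Against the live cluster, the injected key file itself can be inspected the same way (pod name from this run; the file path is whatever the variable resolves to, not assumed here):

    kubectl --context addons-803184 exec busybox -- sh -c 'cat "$GOOGLE_APPLICATION_CREDENTIALS"'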

                                                
                                    
TestAddons/parallel/Registry (16.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.110935ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-67lgb" [9af05f14-ce81-44bb-97d1-37dedf7c187c] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002904594s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nbdps" [cd42d863-c294-464d-b7cd-95396c429181] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004191421s
addons_test.go:331: (dbg) Run:  kubectl --context addons-803184 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-803184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-803184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.023289524s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.79s)
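
Besides the in-cluster wget probe, the registry addon is published on the node IP (the "minikube ip" call above), which is also what the stray DEBUG GET against 192.168.49.2:5000 further down reflects. A host-side probe might use the Docker registry v2 API (the /v2/_catalog endpoint is generic registry API, not something this test itself exercises):

    curl -sS "http://$(out/minikube-linux-amd64 -p addons-803184 ip):5000/v2/_catalog"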

                                                
                                    
TestAddons/parallel/InspektorGadget (11.62s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2j8vf" [dacb9d35-b258-45c2-9d91-43b450a3fda4] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004452015s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 addons disable inspektor-gadget --alsologtostderr -v=1: (5.615084513s)
--- PASS: TestAddons/parallel/InspektorGadget (11.62s)

                                                
                                    
TestAddons/parallel/CSI (63.79s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1028 17:10:56.275987  108914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 28.980549ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-803184 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/10/28 17:11:12 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-803184 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ccff6104-b34d-4da2-bd1d-a93db3d9d2d8] Pending
helpers_test.go:344: "task-pv-pod" [ccff6104-b34d-4da2-bd1d-a93db3d9d2d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ccff6104-b34d-4da2-bd1d-a93db3d9d2d8] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003641239s
addons_test.go:511: (dbg) Run:  kubectl --context addons-803184 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-803184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-803184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-803184 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-803184 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-803184 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-803184 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c7ceff62-7918-4d9e-bb94-6e5ee2b2777b] Pending
helpers_test.go:344: "task-pv-pod-restore" [c7ceff62-7918-4d9e-bb94-6e5ee2b2777b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c7ceff62-7918-4d9e-bb94-6e5ee2b2777b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004095391s
addons_test.go:553: (dbg) Run:  kubectl --context addons-803184 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-803184 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-803184 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.52794685s)
--- PASS: TestAddons/parallel/CSI (63.79s)
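
Note: the repeated "get pvc ... jsonpath={.status.phase}" lines above are the harness polling until the claim binds. A minimal bash equivalent (the until-loop and sleep are illustrative, not part of the harness):

    # wait for the CSI-provisioned claim to reach phase "Bound"
    until [ "$(kubectl --context addons-803184 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
      sleep 2
    done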

                                                
                                    
TestAddons/parallel/Headlamp (17.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-803184 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-ssjsj" [ffd2ace1-1cd3-41b4-bf60-aff2861b4728] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-ssjsj" [ffd2ace1-1cd3-41b4-bf60-aff2861b4728] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003686475s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 addons disable headlamp --alsologtostderr -v=1: (5.804030671s)
--- PASS: TestAddons/parallel/Headlamp (17.58s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.11s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-cb5ds" [413b8155-6752-4834-ac10-fe7b68317d3b] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003694851s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 addons disable cloud-spanner --alsologtostderr -v=1: (1.100443499s)
--- PASS: TestAddons/parallel/CloudSpanner (7.11s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (16.24s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-803184 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-803184 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [84acc0bf-55c0-43dc-be4b-879e65dc441f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [84acc0bf-55c0-43dc-be4b-879e65dc441f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [84acc0bf-55c0-43dc-be4b-879e65dc441f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.004120564s
addons_test.go:906: (dbg) Run:  kubectl --context addons-803184 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 ssh "cat /opt/local-path-provisioner/pvc-6dbabf11-4f7e-4e00-b596-30d9d2fb3ea8_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-803184 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-803184 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (16.24s)
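
Note: the ssh "cat /opt/local-path-provisioner/..." step verifies that the file written by the test pod landed on the node's local-path volume. A sketch of that verification (recovering the path via spec.volumeName is an assumption about the provisioner's <pv>_<namespace>_<pvc> directory layout; the log above shows the resolved path directly):

    # local-path-provisioner stores volumes under /opt/local-path-provisioner/<pv>_<ns>_<pvc>
    PV=$(kubectl --context addons-803184 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-803184 ssh \
      "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"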

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z7q9t" [29592f17-9aa8-4d19-b8d1-dcb2278980ef] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004833039s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

                                                
                                    
TestAddons/parallel/Yakd (10.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-s9knf" [6b6cafa8-92cc-4f20-a369-c0574bf7adfc] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004013071s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-803184 addons disable yakd --alsologtostderr -v=1: (5.807341145s)
--- PASS: TestAddons/parallel/Yakd (10.81s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.46s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-jhlpw" [f711d106-eb63-4b6b-8661-25cd70f4f3b1] Running
I1028 17:10:56.304916  108914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 17:10:56.304942  108914 kapi.go:107] duration metric: took 28.971963ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004128012s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.46s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.08s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-803184
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-803184: (11.815127298s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-803184
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-803184
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-803184
--- PASS: TestAddons/StoppedEnableDisable (12.08s)
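
Note: this test asserts that addon toggles still work against a stopped cluster. The same sequence by hand, using the commands from the log:

    out/minikube-linux-amd64 stop -p addons-803184
    out/minikube-linux-amd64 addons enable dashboard -p addons-803184    # must succeed while stopped
    out/minikube-linux-amd64 addons disable dashboard -p addons-803184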

                                                
                                    
TestCertOptions (41.77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-878515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-878515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.06180263s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-878515 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-878515 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-878515 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-878515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-878515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-878515: (4.051700881s)
--- PASS: TestCertOptions (41.77s)
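
Note: the openssl step above is what actually validates the custom SANs and API server port. To inspect them by hand (the grep is illustrative; "Subject Alternative Name" is standard openssl text output):

    out/minikube-linux-amd64 -p cert-options-878515 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the SANs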

                                                
                                    
TestCertExpiration (221.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-204547 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-204547 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.281622956s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-204547 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-204547 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.546982633s)
helpers_test.go:175: Cleaning up "cert-expiration-204547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-204547
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-204547: (2.210566309s)
--- PASS: TestCertExpiration (221.04s)
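
Note: the two start invocations above exercise certificate rotation: the first issues 3-minute certs, the second (run after they expire) must regenerate them with an 8760h lifetime. By hand (the sleep stands in for the harness's wait; the total 221s runtime is consistent with ~24s start + 180s wait + ~15s restart):

    out/minikube-linux-amd64 start -p cert-expiration-204547 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180    # let the short-lived certs expire
    out/minikube-linux-amd64 start -p cert-expiration-204547 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio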

                                                
                                    
TestForceSystemdFlag (29.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-912863 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-912863 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.49705204s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-912863 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-912863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-912863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-912863: (2.332332726s)
--- PASS: TestForceSystemdFlag (29.08s)
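
Note: the final ssh step is the actual assertion: with --force-systemd, the generated CRI-O drop-in must select the systemd cgroup manager. A sketch (grepping for the standard CRI-O key "cgroup_manager" is an assumption about the drop-in's contents):

    out/minikube-linux-amd64 -p force-systemd-flag-912863 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expect: cgroup_manager = "systemd"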

                                                
                                    
TestForceSystemdEnv (28.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-910504 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-910504 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.741956664s)
helpers_test.go:175: Cleaning up "force-systemd-env-910504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-910504
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-910504: (4.011492515s)
--- PASS: TestForceSystemdEnv (28.75s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.44s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1028 17:47:15.525920  108914 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:47:15.526125  108914 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1028 17:47:15.565531  108914 install.go:62] docker-machine-driver-kvm2: exit status 1
W1028 17:47:15.565858  108914 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 17:47:15.565928  108914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2057019185/001/docker-machine-driver-kvm2
I1028 17:47:15.820310  108914 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2057019185/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000617b20 gz:0xc000617b28 tar:0xc000617ad0 tar.bz2:0xc000617ae0 tar.gz:0xc000617af0 tar.xz:0xc000617b00 tar.zst:0xc000617b10 tbz2:0xc000617ae0 tgz:0xc000617af0 txz:0xc000617b00 tzst:0xc000617b10 xz:0xc000617b40 zip:0xc000617b50 zst:0xc000617b48] Getters:map[file:0xc00232f260 http:0xc002312d20 https:0xc002312d70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 17:47:15.820365  108914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2057019185/001/docker-machine-driver-kvm2
I1028 17:47:17.673573  108914 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:47:17.673682  108914 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1028 17:47:17.705331  108914 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1028 17:47:17.705368  108914 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1028 17:47:17.705484  108914 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 17:47:17.705532  108914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2057019185/002/docker-machine-driver-kvm2
I1028 17:47:17.767753  108914 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2057019185/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000617b20 gz:0xc000617b28 tar:0xc000617ad0 tar.bz2:0xc000617ae0 tar.gz:0xc000617af0 tar.xz:0xc000617b00 tar.zst:0xc000617b10 tbz2:0xc000617ae0 tgz:0xc000617af0 txz:0xc000617b00 tzst:0xc000617b10 xz:0xc000617b40 zip:0xc000617b50 zst:0xc000617b48] Getters:map[file:0xc00219e210 http:0xc002312280 https:0xc0023122d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 17:47:17.767838  108914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2057019185/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.44s)
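
Note: the 404s above are expected: the test first tries the arch-suffixed release asset and, when its checksum file is missing, falls back to the unsuffixed "common version" name. A curl sketch of that fallback (curl and its flags are illustrative; the harness uses go-getter with a checksum query, as shown in the log):

    BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    curl -fLO "$BASE/docker-machine-driver-kvm2-amd64" \
      || curl -fLO "$BASE/docker-machine-driver-kvm2"    # common-version fallback on 404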

                                                
                                    
TestErrorSpam/setup (20.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-967921 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-967921 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-967921 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-967921 --driver=docker  --container-runtime=crio: (20.315639693s)
--- PASS: TestErrorSpam/setup (20.32s)

                                                
                                    
TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

                                                
                                    
TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 status
--- PASS: TestErrorSpam/status (0.88s)

                                                
                                    
TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 stop: (1.177607674s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-967921 --log_dir /tmp/nospam-967921 stop
--- PASS: TestErrorSpam/stop (1.36s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19872-102136/.minikube/files/etc/test/nested/copy/108914/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.1s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-301254 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-301254 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.099485813s)
--- PASS: TestFunctional/serial/StartWithProxy (37.10s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.32s)

=== RUN   TestFunctional/serial/SoftStart
I1028 17:18:00.234534  108914 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-301254 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-301254 --alsologtostderr -v=8: (28.323832241s)
functional_test.go:663: soft start took 28.324641795s for "functional-301254" cluster.
I1028 17:18:28.558749  108914 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (28.32s)
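
Note: "soft start" here means re-running start against an existing, already-running profile; it must reuse the profile's configuration rather than recreate the cluster:

    out/minikube-linux-amd64 start -p functional-301254 --alsologtostderr -v=8    # no config changes, ~28s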

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-301254 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 cache add registry.k8s.io/pause:3.1: (1.616390383s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 cache add registry.k8s.io/pause:3.3: (1.615607218s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 cache add registry.k8s.io/pause:latest: (1.598102304s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-301254 /tmp/TestFunctionalserialCacheCmdcacheadd_local2277953690/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cache add minikube-local-cache-test:functional-301254
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 cache add minikube-local-cache-test:functional-301254: (2.060082206s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cache delete minikube-local-cache-test:functional-301254
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-301254
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.41s)
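
Note: this test round-trips a locally built image through the minikube cache. The same flow by hand (the build-context directory stands in for the tmp path in the log):

    docker build -t minikube-local-cache-test:functional-301254 <build-context-dir>
    out/minikube-linux-amd64 -p functional-301254 cache add minikube-local-cache-test:functional-301254
    out/minikube-linux-amd64 -p functional-301254 cache delete minikube-local-cache-test:functional-301254
    docker rmi minikube-local-cache-test:functional-301254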

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (261.914892ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 cache reload: (1.407049796s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
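
Note: the expected-failure inspecti call is the point of this test: after the image is removed from the node, "cache reload" must restore it from the local cache. By hand:

    out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exits 1: image gone
    out/minikube-linux-amd64 -p functional-301254 cache reload
    out/minikube-linux-amd64 -p functional-301254 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again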

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 kubectl -- --context functional-301254 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-301254 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.11s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-301254 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-301254 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.107430195s)
functional_test.go:761: restart took 38.107559785s for "functional-301254" cluster.
I1028 17:19:16.964217  108914 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (38.11s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-301254 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 logs: (1.360426915s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 logs --file /tmp/TestFunctionalserialLogsFileCmd3486598784/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 logs --file /tmp/TestFunctionalserialLogsFileCmd3486598784/001/logs.txt: (1.380774631s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-301254 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-301254
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-301254: exit status 115 (325.407683ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32350 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-301254 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)
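
Note: exit status 115 (SVC_UNREACHABLE) is the expected outcome here, since the service's selector matches no running pod. To reproduce (the $? check is illustrative):

    kubectl --context functional-301254 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-301254; echo "exit: $?"    # expect 115
    kubectl --context functional-301254 delete -f testdata/invalidsvc.yaml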

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 config get cpus: exit status 14 (120.591741ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 config get cpus: exit status 14 (93.887472ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
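
Note: both "Non-zero exit ... status 14" probes above are expected failures; the log's stderr shows exit 14 means the key is absent from the config. The round-trip by hand:

    out/minikube-linux-amd64 -p functional-301254 config unset cpus
    out/minikube-linux-amd64 -p functional-301254 config get cpus    # exits 14: key absent
    out/minikube-linux-amd64 -p functional-301254 config set cpus 2
    out/minikube-linux-amd64 -p functional-301254 config get cpus    # prints 2
    out/minikube-linux-amd64 -p functional-301254 config unset cpus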

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.47s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-301254 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-301254 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 148475: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.47s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-301254 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-301254 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.534193ms)
-- stdout --
	* [functional-301254] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1028 17:19:27.650736  147081 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:19:27.650879  147081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:19:27.650888  147081 out.go:358] Setting ErrFile to fd 2...
	I1028 17:19:27.650894  147081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:19:27.651226  147081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:19:27.652011  147081 out.go:352] Setting JSON to false
	I1028 17:19:27.653782  147081 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3709,"bootTime":1730132259,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:19:27.653900  147081 start.go:139] virtualization: kvm guest
	I1028 17:19:27.657696  147081 out.go:177] * [functional-301254] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:19:27.659623  147081 notify.go:220] Checking for updates...
	I1028 17:19:27.659717  147081 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:19:27.661603  147081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:19:27.663435  147081 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:19:27.665989  147081 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:19:27.667769  147081 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:19:27.669388  147081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:19:27.671660  147081 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:19:27.672525  147081 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:19:27.704980  147081 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:19:27.705104  147081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:19:27.772731  147081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-28 17:19:27.752274098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:19:27.772919  147081 docker.go:318] overlay module found
	I1028 17:19:27.774913  147081 out.go:177] * Using the docker driver based on existing profile
	I1028 17:19:27.778231  147081 start.go:297] selected driver: docker
	I1028 17:19:27.778266  147081 start.go:901] validating driver "docker" against &{Name:functional-301254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-301254 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:19:27.778403  147081 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:19:27.780971  147081 out.go:201] 
	W1028 17:19:27.782462  147081 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 17:19:27.783947  147081 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-301254 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
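
Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected --dry-run behaviour: the flags are validated against the existing profile, but nothing is created or started. A minimal reproduction sketch, assuming a built out/minikube-linux-amd64 and the existing functional-301254 profile:

    # --dry-run validates the requested config without starting anything
    out/minikube-linux-amd64 start -p functional-301254 --dry-run \
      --memory 250MB --driver=docker --container-runtime=crio
    echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY); 0 when --memory is omitted, as in the passing run above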
TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-301254 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-301254 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.364031ms)
-- stdout --
	* [functional-301254] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1028 17:19:28.070647  147327 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:19:28.070751  147327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:19:28.070761  147327 out.go:358] Setting ErrFile to fd 2...
	I1028 17:19:28.070766  147327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:19:28.071024  147327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:19:28.071579  147327 out.go:352] Setting JSON to false
	I1028 17:19:28.072545  147327 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3709,"bootTime":1730132259,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:19:28.072660  147327 start.go:139] virtualization: kvm guest
	I1028 17:19:28.074933  147327 out.go:177] * [functional-301254] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1028 17:19:28.077120  147327 notify.go:220] Checking for updates...
	I1028 17:19:28.077201  147327 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:19:28.079141  147327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:19:28.080922  147327 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:19:28.083050  147327 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:19:28.084551  147327 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:19:28.086189  147327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:19:28.087754  147327 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:19:28.088421  147327 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:19:28.116120  147327 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:19:28.116236  147327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:19:28.175538  147327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:59 SystemTime:2024-10-28 17:19:28.16580616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:19:28.175637  147327 docker.go:318] overlay module found
	I1028 17:19:28.177343  147327 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1028 17:19:28.178649  147327 start.go:297] selected driver: docker
	I1028 17:19:28.178689  147327 start.go:901] validating driver "docker" against &{Name:functional-301254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-301254 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:19:28.178822  147327 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:19:28.181156  147327 out.go:201] 
	W1028 17:19:28.182419  147327 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 17:19:28.183685  147327 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
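
Note: the French output is the point of this test. The stderr above translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A sketch of the same check, assuming minikube selects its language from the standard locale environment variables (the harness's exact mechanism may differ):

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-301254 --dry-run \
      --memory 250MB --driver=docker --container-runtime=crio
    # exits 23 as in the English run, with the message localized to French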
TestFunctional/parallel/StatusCmd (1.35s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
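
Note: -f takes a Go template rendered over minikube's status struct, so only the field references ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}) are significant; the "kublet:" prefix in the command above is a literal label in the template, not a field name. Equivalent one-field and JSON forms:

    out/minikube-linux-amd64 -p functional-301254 status -f '{{.Host}}'
    out/minikube-linux-amd64 -p functional-301254 status -o json   # full status as JSON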
TestFunctional/parallel/ServiceCmdConnect (7.78s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-301254 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-301254 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kd7x9" [bb704486-a7d3-4fbc-9a51-45ba974933f0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kd7x9" [bb704486-a7d3-4fbc-9a51-45ba974933f0] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003978293s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30864
functional_test.go:1675: http://192.168.49.2:30864: success! body:
Hostname: hello-node-connect-67bdd5bbb4-kd7x9
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30864
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.78s)
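
The round trip above can be reproduced by hand; only the curl step is added here, and the NodePort (30864 in this run) is allocated per service:

    kubectl --context functional-301254 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-301254 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-301254 service hello-node-connect --url)
    curl -s "$URL"   # echoserver answers with the request details shown above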
TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)
TestFunctional/parallel/SSHCmd (0.63s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)
TestFunctional/parallel/CpCmd (2.22s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh -n functional-301254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cp functional-301254:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4231199026/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh -n functional-301254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh -n functional-301254 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.22s)
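
Note: the three cp runs above cover host-to-guest, guest-to-host, and copying into a guest directory that does not yet exist (/tmp/does/not/exist is created on the fly). The two general shapes are:

    out/minikube-linux-amd64 -p <profile> cp <local-src> <guest-dest>
    out/minikube-linux-amd64 -p <profile> cp <profile>:<guest-src> <local-dest>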
TestFunctional/parallel/MySQL (26.44s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-301254 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-822lv" [db8e7eb5-f56a-446b-a2d2-55dd3187ce36] Pending
helpers_test.go:344: "mysql-6cdb49bbb-822lv" [db8e7eb5-f56a-446b-a2d2-55dd3187ce36] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-822lv" [db8e7eb5-f56a-446b-a2d2-55dd3187ce36] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.00568104s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-301254 exec mysql-6cdb49bbb-822lv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-301254 exec mysql-6cdb49bbb-822lv -- mysql -ppassword -e "show databases;": exit status 1 (152.988002ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1028 17:19:58.533336  108914 retry.go:31] will retry after 731.765467ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-301254 exec mysql-6cdb49bbb-822lv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-301254 exec mysql-6cdb49bbb-822lv -- mysql -ppassword -e "show databases;": exit status 1 (101.815635ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1028 17:19:59.367933  108914 retry.go:31] will retry after 1.660325458s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-301254 exec mysql-6cdb49bbb-822lv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-301254 exec mysql-6cdb49bbb-822lv -- mysql -ppassword -e "show databases;": exit status 1 (104.623973ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1028 17:20:01.134049  108914 retry.go:31] will retry after 1.356929185s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-301254 exec mysql-6cdb49bbb-822lv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.44s)
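
Note: the retried errors above are ordinary mysqld startup noise: the pod reports Ready before the server finishes initializing, so early queries see ERROR 1045 or ERROR 2002 until the socket is up. A minimal polling sketch equivalent to the harness's jittered retries, assuming the deployment from testdata/mysql.yaml:

    until kubectl --context functional-301254 exec deploy/mysql -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2   # the harness used jittered backoffs of roughly 0.7-1.7s
    done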
TestFunctional/parallel/FileSync (0.36s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/108914/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo cat /etc/test/nested/copy/108914/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
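
Note: file sync is driven by the $MINIKUBE_HOME/files tree: anything placed under it is copied into the node at the same path (here /etc/test/nested/copy/108914/hosts; 108914 is the test process's pid). A sketch, assuming the default MINIKUBE_HOME and that the copy happens on the next minikube start (the path /etc/demo/hosts is hypothetical):

    mkdir -p ~/.minikube/files/etc/demo
    echo "hello from the host" > ~/.minikube/files/etc/demo/hosts
    out/minikube-linux-amd64 -p functional-301254 ssh "cat /etc/demo/hosts"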
TestFunctional/parallel/CertSync (1.49s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/108914.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo cat /etc/ssl/certs/108914.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/108914.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo cat /usr/share/ca-certificates/108914.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1089142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo cat /etc/ssl/certs/1089142.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1089142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo cat /usr/share/ca-certificates/1089142.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-301254 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh "sudo systemctl is-active docker": exit status 1 (296.967722ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh "sudo systemctl is-active containerd": exit status 1 (297.098949ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
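
Note: the non-zero exits above are the pass condition, not a failure: systemctl is-active exits 0 only for an active unit and exits 3 for an inactive one (the "Process exited with status 3" in stderr), which the minikube ssh wrapper surfaces as exit status 1. With crio as the runtime, docker and containerd are expected to be inactive:

    out/minikube-linux-amd64 -p functional-301254 ssh "sudo systemctl is-active docker" \
      || echo "docker is disabled, as expected under --container-runtime=crio"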
TestFunctional/parallel/License (1.17s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.173373899s)
--- PASS: TestFunctional/parallel/License (1.17s)
TestFunctional/parallel/ServiceCmd/DeployApp (9.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-301254 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-301254 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bs2m4" [526df3a6-7356-41a1-b631-15cd8d6d03d3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bs2m4" [526df3a6-7356-41a1-b631-15cd8d6d03d3] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003749217s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.22s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)
TestFunctional/parallel/ProfileCmd/profile_list (0.72s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "629.996616ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "92.07785ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.72s)
TestFunctional/parallel/MountCmd/any-port (8.9s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdany-port1827821772/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730135965964855689" to /tmp/TestFunctionalparallelMountCmdany-port1827821772/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730135965964855689" to /tmp/TestFunctionalparallelMountCmdany-port1827821772/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730135965964855689" to /tmp/TestFunctionalparallelMountCmdany-port1827821772/001/test-1730135965964855689
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.907648ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1028 17:19:26.344151  108914 retry.go:31] will retry after 374.994936ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 28 17:19 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 28 17:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 28 17:19 test-1730135965964855689
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh cat /mount-9p/test-1730135965964855689
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-301254 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7a7cb15e-acb2-49a3-8e31-ad5146d68aca] Pending
helpers_test.go:344: "busybox-mount" [7a7cb15e-acb2-49a3-8e31-ad5146d68aca] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7a7cb15e-acb2-49a3-8e31-ad5146d68aca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7a7cb15e-acb2-49a3-8e31-ad5146d68aca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004276501s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-301254 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdany-port1827821772/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.90s)
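
Note: the first findmnt failure above is only a race with the background 9p server becoming visible, hence the ~375ms retry. Driven by hand (the host path /tmp/hostdir is hypothetical):

    out/minikube-linux-amd64 mount -p functional-301254 /tmp/hostdir:/mount-9p &
    sleep 1
    out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-301254 ssh "sudo umount -f /mount-9p"
    kill $!   # stop the background mount server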
TestFunctional/parallel/ProfileCmd/profile_json_output (0.77s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "689.44184ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "83.877243ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.77s)
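
Note: --light is faster (~84ms vs ~689ms above) because it skips probing each profile's cluster status. A parsing sketch, assuming jq is available and minikube's current JSON shape, which groups profiles under "valid" and "invalid":

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'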
TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)
TestFunctional/parallel/Version/components (0.65s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-301254 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-301254  | c24dfb32e70dc | 3.33kB |
| localhost/my-image                      | functional-301254  | 8fd021261055e | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-301254  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | cb8f91112b6b5 | 48.4MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-301254 image ls --format table --alsologtostderr:
I1028 17:19:58.270935  153585 out.go:345] Setting OutFile to fd 1 ...
I1028 17:19:58.271347  153585 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:58.271364  153585 out.go:358] Setting ErrFile to fd 2...
I1028 17:19:58.271372  153585 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:58.271647  153585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
I1028 17:19:58.272538  153585 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:58.272708  153585 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:58.273355  153585 cli_runner.go:164] Run: docker container inspect functional-301254 --format={{.State.Status}}
I1028 17:19:58.294480  153585 ssh_runner.go:195] Run: systemctl --version
I1028 17:19:58.294544  153585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-301254
I1028 17:19:58.313047  153585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/functional-301254/id_rsa Username:docker}
I1028 17:19:58.400821  153585 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-301254 image ls --format json --alsologtostderr:
[{"id":"0ec3b6157c879c29dc4c41a7df4f96bc9b20313831b912e97bd873043b0c483d","repoDigests":["docker.io/library/f088d0c40cb89e26ff44adbe0e144e41112a2c470ffbbb651a3882331bd61ed4-tmp@sha256:90f4b113ff2888d2d0889bcc8a247819fd1676fc43fc71910a6b07258f0bd914"],"repoTags":[],"size":"1465612"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"da86e
6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube
/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-301254"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io
/echoserver:1.8"],"size":"97846543"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager
:v1.31.2"],"size":"89474374"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3"],"repoTags":["docker.io/l
ibrary/nginx:alpine"],"size":"48414943"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c24dfb32e70dc69e2a9810a5950f692797b1ebde4073f37dd36c0e65878c76b6","repoDigests":["localhost/minikube-local-cache-test@sha256:417523314eb5583d778bfa827c90dfe3b7a142bb1b2548df8a7f45e1ca82a856"],"repoTags":["localhost/minikube-local-cache-test:functional-301254"],"size":"3330"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTa
gs":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"8fd021261055e263a5eeca3de8ec8d9ab6f5580a013e46fadc0b2cca7e13038c","repoDigests":["localhost/my-image@sha256:4346d21d5c32c48c78ed69160c7ced56c6a7cadd8ac8fdd63f4e1f5d52577c2d"],"repoTags":["localhost/my-image:functional-301254"],"size
":"1468193"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-301254 image ls --format json --alsologtostderr:
I1028 17:19:58.030708  153532 out.go:345] Setting OutFile to fd 1 ...
I1028 17:19:58.030854  153532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:58.030867  153532 out.go:358] Setting ErrFile to fd 2...
I1028 17:19:58.030873  153532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:58.031236  153532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
I1028 17:19:58.032085  153532 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:58.032252  153532 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:58.032777  153532 cli_runner.go:164] Run: docker container inspect functional-301254 --format={{.State.Status}}
I1028 17:19:58.052966  153532 ssh_runner.go:195] Run: systemctl --version
I1028 17:19:58.053017  153532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-301254
I1028 17:19:58.072073  153532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/functional-301254/id_rsa Username:docker}
I1028 17:19:58.156503  153532 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
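
The JSON form is an array of {id, repoDigests, repoTags, size} objects, which makes it the easiest format to script against. A sketch, assuming jq:

    out/minikube-linux-amd64 -p functional-301254 image ls --format json \
      | jq -r '.[].repoTags[]?'   # tagged names only; untagged entries have empty repoTags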
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-301254 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-301254
size: "4943877"
- id: c24dfb32e70dc69e2a9810a5950f692797b1ebde4073f37dd36c0e65878c76b6
repoDigests:
- localhost/minikube-local-cache-test@sha256:417523314eb5583d778bfa827c90dfe3b7a142bb1b2548df8a7f45e1ca82a856
repoTags:
- localhost/minikube-local-cache-test:functional-301254
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-301254 image ls --format yaml --alsologtostderr:
I1028 17:19:52.499839  152660 out.go:345] Setting OutFile to fd 1 ...
I1028 17:19:52.499979  152660 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:52.499992  152660 out.go:358] Setting ErrFile to fd 2...
I1028 17:19:52.500000  152660 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:52.500236  152660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
I1028 17:19:52.500963  152660 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:52.501112  152660 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:52.501678  152660 cli_runner.go:164] Run: docker container inspect functional-301254 --format={{.State.Status}}
I1028 17:19:52.518818  152660 ssh_runner.go:195] Run: systemctl --version
I1028 17:19:52.518869  152660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-301254
I1028 17:19:52.537046  152660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/functional-301254/id_rsa Username:docker}
I1028 17:19:52.624029  152660 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh pgrep buildkitd: exit status 1 (253.386304ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image build -t localhost/my-image:functional-301254 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 image build -t localhost/my-image:functional-301254 testdata/build --alsologtostderr: (4.754400069s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-301254 image build -t localhost/my-image:functional-301254 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0ec3b6157c8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-301254
--> 8fd02126105
Successfully tagged localhost/my-image:functional-301254
8fd021261055e263a5eeca3de8ec8d9ab6f5580a013e46fadc0b2cca7e13038c
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-301254 image build -t localhost/my-image:functional-301254 testdata/build --alsologtostderr:
I1028 17:19:53.061157  152893 out.go:345] Setting OutFile to fd 1 ...
I1028 17:19:53.061320  152893 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:53.061331  152893 out.go:358] Setting ErrFile to fd 2...
I1028 17:19:53.061335  152893 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:19:53.061504  152893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
I1028 17:19:53.062123  152893 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:53.062645  152893 config.go:182] Loaded profile config "functional-301254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:19:53.063070  152893 cli_runner.go:164] Run: docker container inspect functional-301254 --format={{.State.Status}}
I1028 17:19:53.080637  152893 ssh_runner.go:195] Run: systemctl --version
I1028 17:19:53.080691  152893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-301254
I1028 17:19:53.098696  152893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/functional-301254/id_rsa Username:docker}
I1028 17:19:53.180204  152893 build_images.go:161] Building image from path: /tmp/build.1273206773.tar
I1028 17:19:53.180276  152893 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 17:19:53.189140  152893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1273206773.tar
I1028 17:19:53.192533  152893 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1273206773.tar: stat -c "%s %y" /var/lib/minikube/build/build.1273206773.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1273206773.tar': No such file or directory
I1028 17:19:53.192568  152893 ssh_runner.go:362] scp /tmp/build.1273206773.tar --> /var/lib/minikube/build/build.1273206773.tar (3072 bytes)
I1028 17:19:53.215676  152893 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1273206773
I1028 17:19:53.224246  152893 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1273206773 -xf /var/lib/minikube/build/build.1273206773.tar
I1028 17:19:53.233206  152893 crio.go:315] Building image: /var/lib/minikube/build/build.1273206773
I1028 17:19:53.233271  152893 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-301254 /var/lib/minikube/build/build.1273206773 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1028 17:19:57.740543  152893 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-301254 /var/lib/minikube/build/build.1273206773 --cgroup-manager=cgroupfs: (4.507236062s)
I1028 17:19:57.740631  152893 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1273206773
I1028 17:19:57.749595  152893 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1273206773.tar
I1028 17:19:57.758149  152893 build_images.go:217] Built localhost/my-image:functional-301254 from /tmp/build.1273206773.tar
I1028 17:19:57.758188  152893 build_images.go:133] succeeded building to: functional-301254
I1028 17:19:57.758195  152893 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.22s)
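For reference, the three STEP lines above imply that testdata/build contains a Dockerfile of the form FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /. A minimal sketch of reproducing this flow by hand, using only commands that appear in the log:
	# The test first confirms buildkitd is NOT running in the node (exit status 1 is expected):
	out/minikube-linux-amd64 -p functional-301254 ssh pgrep buildkitd
	# Build from the local context, then confirm the image is listed:
	out/minikube-linux-amd64 -p functional-301254 image build -t localhost/my-image:functional-301254 testdata/build --alsologtostderr
	out/minikube-linux-amd64 -p functional-301254 image ls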

TestFunctional/parallel/ImageCommands/Setup (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.716232633s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-301254
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image load --daemon kicbase/echo-server:functional-301254 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-301254 image load --daemon kicbase/echo-server:functional-301254 --alsologtostderr: (1.009136728s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image load --daemon kicbase/echo-server:functional-301254 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-301254
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image load --daemon kicbase/echo-server:functional-301254 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image save kicbase/echo-server:functional-301254 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image rm kicbase/echo-server:functional-301254 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 service list -o json
functional_test.go:1494: Took "567.08204ms" to run "out/minikube-linux-amd64 -p functional-301254 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.20s)
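Taken together with ImageSaveToFile above, this verifies a full save/load round trip through a tarball; a minimal sketch, reusing the tar path from this run:
	# Save the image from the node to a tar on the host, load it back, then list:
	out/minikube-linux-amd64 -p functional-301254 image save kicbase/echo-server:functional-301254 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-301254 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-301254 image ls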

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30703
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdspecific-port3698285451/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.093558ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1028 17:19:35.240038  108914 retry.go:31] will retry after 302.7745ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdspecific-port3698285451/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh "sudo umount -f /mount-9p": exit status 1 (327.01785ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-301254 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdspecific-port3698285451/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)
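A minimal sketch of the 9p mount check this test performs; /tmp/mnt here is a hypothetical stand-in for the per-run temp dir seen in the log:
	# Start the 9p mount on a fixed port in the background:
	out/minikube-linux-amd64 mount -p functional-301254 /tmp/mnt:/mount-9p --alsologtostderr -v=1 --port 46464 &
	# Verify it is visible inside the node (the first probe can race the mount and retry, as above):
	out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-301254 ssh -- ls -la /mount-9p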

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30703
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
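The ServiceCmd subtests all resolve the same NodePort endpoint (30703 on 192.168.49.2) through different output modes; a sketch of the variants exercised above:
	out/minikube-linux-amd64 -p functional-301254 service list
	out/minikube-linux-amd64 -p functional-301254 service list -o json
	out/minikube-linux-amd64 -p functional-301254 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-301254 service hello-node --url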

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-301254
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 image save --daemon kicbase/echo-server:functional-301254 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-301254
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T" /mount1: exit status 1 (346.47392ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1028 17:19:37.042137  108914 retry.go:31] will retry after 344.67083ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-301254 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-301254 /tmp/TestFunctionalparallelMountCmdVerifyCleanup318751788/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)
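A minimal sketch of the cleanup scenario: three concurrent mounts of one host dir, then a single --kill=true that must reap them all (/tmp/mnt is a hypothetical stand-in for the temp dir in the log):
	out/minikube-linux-amd64 mount -p functional-301254 /tmp/mnt:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-301254 /tmp/mnt:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-301254 /tmp/mnt:/mount3 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-301254 ssh "findmnt -T" /mount1
	out/minikube-linux-amd64 mount -p functional-301254 --kill=true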

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-301254 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-301254 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-301254 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-301254 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 151519: os: process already finished
helpers_test.go:508: unable to kill pid 151289: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-301254 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-301254 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4c904adc-bd2e-4258-aa69-06e379afb968] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
2024/10/28 17:19:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "nginx-svc" [4c904adc-bd2e-4258-aa69-06e379afb968] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 26.003995159s
I1028 17:20:04.975208  108914 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-301254 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.38.29 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
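A sketch of the tunnel sequence this serial group exercises, using the service and IP from the log:
	# StartTunnel: run the tunnel in the background:
	out/minikube-linux-amd64 -p functional-301254 tunnel --alsologtostderr &
	# WaitService/Setup: deploy the LoadBalancer test service and wait for its pod:
	kubectl --context functional-301254 apply -f testdata/testsvc.yaml
	# WaitService/IngressIP: read the ingress IP the tunnel assigned (10.96.38.29 in this run):
	kubectl --context functional-301254 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'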

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-301254 tunnel --alsologtostderr] ...
E1028 17:20:37.740366  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:37.746812  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:37.758278  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:37.779775  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:37.821309  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:37.902838  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:38.064532  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:38.386305  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:39.027817  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:40.309815  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:42.871304  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:47.993347  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:58.235562  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:21:18.717747  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:21:59.679713  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-301254
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-301254
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-301254
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (101.32s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-949926 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1028 17:23:21.601290  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.351339  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.358284  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.370337  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.391828  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.433306  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.514804  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.676761  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:24.998513  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:25.640299  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:24:26.922553  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-949926 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m40.628637387s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
E1028 17:24:29.484476  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/StartCluster (101.32s)
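For reference, the invocation that brought up the three-control-plane cluster, plus the follow-up status check (both taken verbatim from the log):
	out/minikube-linux-amd64 start -p ha-949926 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr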

TestMultiControlPlane/serial/DeployApp (6.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-949926 -- rollout status deployment/busybox: (4.486980361s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- get pods -o jsonpath='{.items[*].metadata.name}'
E1028 17:24:34.605823  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-652cd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-qxtqd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-652cd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-qxtqd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-652cd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-qxtqd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.43s)
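A minimal sketch of the per-pod DNS checks above, shown for one replica (the other two pods get the same three lookups):
	out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- nslookup kubernetes.io
	out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- nslookup kubernetes.default
	out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- nslookup kubernetes.default.svc.cluster.local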

TestMultiControlPlane/serial/PingHostFromPods (1.07s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-652cd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-652cd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-qxtqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-qxtqd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)
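Each pod resolves host.minikube.internal and then pings the resulting gateway address (192.168.49.1 here); a sketch for one pod:
	out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p ha-949926 -- exec busybox-7dff88458-2d8tk -- sh -c "ping -c 1 192.168.49.1"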

TestMultiControlPlane/serial/AddWorkerNode (30.35s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-949926 -v=7 --alsologtostderr
E1028 17:24:44.847632  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:25:05.329150  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-949926 -v=7 --alsologtostderr: (29.522357361s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.35s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-949926 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (15.51s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp testdata/cp-test.txt ha-949926:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2864861247/001/cp-test_ha-949926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926:/home/docker/cp-test.txt ha-949926-m02:/home/docker/cp-test_ha-949926_ha-949926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test_ha-949926_ha-949926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926:/home/docker/cp-test.txt ha-949926-m03:/home/docker/cp-test_ha-949926_ha-949926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test_ha-949926_ha-949926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926:/home/docker/cp-test.txt ha-949926-m04:/home/docker/cp-test_ha-949926_ha-949926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test_ha-949926_ha-949926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp testdata/cp-test.txt ha-949926-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2864861247/001/cp-test_ha-949926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m02:/home/docker/cp-test.txt ha-949926:/home/docker/cp-test_ha-949926-m02_ha-949926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test_ha-949926-m02_ha-949926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m02:/home/docker/cp-test.txt ha-949926-m03:/home/docker/cp-test_ha-949926-m02_ha-949926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test_ha-949926-m02_ha-949926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m02:/home/docker/cp-test.txt ha-949926-m04:/home/docker/cp-test_ha-949926-m02_ha-949926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test_ha-949926-m02_ha-949926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp testdata/cp-test.txt ha-949926-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2864861247/001/cp-test_ha-949926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m03:/home/docker/cp-test.txt ha-949926:/home/docker/cp-test_ha-949926-m03_ha-949926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test_ha-949926-m03_ha-949926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m03:/home/docker/cp-test.txt ha-949926-m02:/home/docker/cp-test_ha-949926-m03_ha-949926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test_ha-949926-m03_ha-949926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m03:/home/docker/cp-test.txt ha-949926-m04:/home/docker/cp-test_ha-949926-m03_ha-949926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test_ha-949926-m03_ha-949926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp testdata/cp-test.txt ha-949926-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2864861247/001/cp-test_ha-949926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m04:/home/docker/cp-test.txt ha-949926:/home/docker/cp-test_ha-949926-m04_ha-949926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test_ha-949926-m04_ha-949926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m04:/home/docker/cp-test.txt ha-949926-m02:/home/docker/cp-test_ha-949926-m04_ha-949926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m02 "sudo cat /home/docker/cp-test_ha-949926-m04_ha-949926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 cp ha-949926-m04:/home/docker/cp-test.txt ha-949926-m03:/home/docker/cp-test_ha-949926-m04_ha-949926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926-m03 "sudo cat /home/docker/cp-test_ha-949926-m04_ha-949926-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.51s)
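The CopyFile matrix repeats one round trip for every source/destination pair across the four nodes; a sketch of a single iteration:
	# Host -> node, verify over ssh, then node -> node through the same cp subcommand:
	out/minikube-linux-amd64 -p ha-949926 cp testdata/cp-test.txt ha-949926:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-949926 ssh -n ha-949926 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p ha-949926 cp ha-949926:/home/docker/cp-test.txt ha-949926-m02:/home/docker/cp-test_ha-949926_ha-949926-m02.txt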

TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-949926 node stop m02 -v=7 --alsologtostderr: (11.859613731s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr: exit status 7 (656.791534ms)

                                                
                                                
-- stdout --
	ha-949926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949926-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:25:35.898673  175525 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:25:35.898795  175525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:25:35.898805  175525 out.go:358] Setting ErrFile to fd 2...
	I1028 17:25:35.898809  175525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:25:35.899002  175525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:25:35.899203  175525 out.go:352] Setting JSON to false
	I1028 17:25:35.899233  175525 mustload.go:65] Loading cluster: ha-949926
	I1028 17:25:35.899280  175525 notify.go:220] Checking for updates...
	I1028 17:25:35.899688  175525 config.go:182] Loaded profile config "ha-949926": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:35.899711  175525 status.go:174] checking status of ha-949926 ...
	I1028 17:25:35.900232  175525 cli_runner.go:164] Run: docker container inspect ha-949926 --format={{.State.Status}}
	I1028 17:25:35.917941  175525 status.go:371] ha-949926 host status = "Running" (err=<nil>)
	I1028 17:25:35.917968  175525 host.go:66] Checking if "ha-949926" exists ...
	I1028 17:25:35.918287  175525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-949926
	I1028 17:25:35.937092  175525 host.go:66] Checking if "ha-949926" exists ...
	I1028 17:25:35.937465  175525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:25:35.937564  175525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-949926
	I1028 17:25:35.956232  175525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/ha-949926/id_rsa Username:docker}
	I1028 17:25:36.044901  175525 ssh_runner.go:195] Run: systemctl --version
	I1028 17:25:36.049366  175525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:25:36.060659  175525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:25:36.113501  175525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:73 SystemTime:2024-10-28 17:25:36.103201508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:25:36.114707  175525 kubeconfig.go:125] found "ha-949926" server: "https://192.168.49.254:8443"
	I1028 17:25:36.114762  175525 api_server.go:166] Checking apiserver status ...
	I1028 17:25:36.114813  175525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:25:36.126474  175525 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup
	I1028 17:25:36.135357  175525 api_server.go:182] apiserver freezer: "2:freezer:/docker/731f94f20d6c43aa9dc955b1b93581e8da8253b658e3cefdf9eb54945c32dcc0/crio/crio-e2c3da067f32fd85673d02f47af82f738a0ce2f6ee954520284b75a4b9e0f6c8"
	I1028 17:25:36.135410  175525 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/731f94f20d6c43aa9dc955b1b93581e8da8253b658e3cefdf9eb54945c32dcc0/crio/crio-e2c3da067f32fd85673d02f47af82f738a0ce2f6ee954520284b75a4b9e0f6c8/freezer.state
	I1028 17:25:36.144500  175525 api_server.go:204] freezer state: "THAWED"
	I1028 17:25:36.144534  175525 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1028 17:25:36.148749  175525 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1028 17:25:36.148786  175525 status.go:463] ha-949926 apiserver status = Running (err=<nil>)
	I1028 17:25:36.148798  175525 status.go:176] ha-949926 status: &{Name:ha-949926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:25:36.148817  175525 status.go:174] checking status of ha-949926-m02 ...
	I1028 17:25:36.149053  175525 cli_runner.go:164] Run: docker container inspect ha-949926-m02 --format={{.State.Status}}
	I1028 17:25:36.167270  175525 status.go:371] ha-949926-m02 host status = "Stopped" (err=<nil>)
	I1028 17:25:36.167297  175525 status.go:384] host is not running, skipping remaining checks
	I1028 17:25:36.167305  175525 status.go:176] ha-949926-m02 status: &{Name:ha-949926-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:25:36.167331  175525 status.go:174] checking status of ha-949926-m03 ...
	I1028 17:25:36.167647  175525 cli_runner.go:164] Run: docker container inspect ha-949926-m03 --format={{.State.Status}}
	I1028 17:25:36.188132  175525 status.go:371] ha-949926-m03 host status = "Running" (err=<nil>)
	I1028 17:25:36.188165  175525 host.go:66] Checking if "ha-949926-m03" exists ...
	I1028 17:25:36.188402  175525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-949926-m03
	I1028 17:25:36.206291  175525 host.go:66] Checking if "ha-949926-m03" exists ...
	I1028 17:25:36.206557  175525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:25:36.206607  175525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-949926-m03
	I1028 17:25:36.223867  175525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/ha-949926-m03/id_rsa Username:docker}
	I1028 17:25:36.308837  175525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:25:36.320123  175525 kubeconfig.go:125] found "ha-949926" server: "https://192.168.49.254:8443"
	I1028 17:25:36.320154  175525 api_server.go:166] Checking apiserver status ...
	I1028 17:25:36.320187  175525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:25:36.330343  175525 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1394/cgroup
	I1028 17:25:36.340131  175525 api_server.go:182] apiserver freezer: "2:freezer:/docker/df46f1ca3f2c0022461560fdbf2926b26104613b5f298ed4c1c2b01737865a42/crio/crio-d7e0f6153a270618c734d7705fad861cd9c06a9e11fb079fc00e90321594a2e7"
	I1028 17:25:36.340205  175525 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/df46f1ca3f2c0022461560fdbf2926b26104613b5f298ed4c1c2b01737865a42/crio/crio-d7e0f6153a270618c734d7705fad861cd9c06a9e11fb079fc00e90321594a2e7/freezer.state
	I1028 17:25:36.348032  175525 api_server.go:204] freezer state: "THAWED"
	I1028 17:25:36.348078  175525 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1028 17:25:36.352296  175525 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1028 17:25:36.352324  175525 status.go:463] ha-949926-m03 apiserver status = Running (err=<nil>)
	I1028 17:25:36.352336  175525 status.go:176] ha-949926-m03 status: &{Name:ha-949926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:25:36.352358  175525 status.go:174] checking status of ha-949926-m04 ...
	I1028 17:25:36.352633  175525 cli_runner.go:164] Run: docker container inspect ha-949926-m04 --format={{.State.Status}}
	I1028 17:25:36.370561  175525 status.go:371] ha-949926-m04 host status = "Running" (err=<nil>)
	I1028 17:25:36.370593  175525 host.go:66] Checking if "ha-949926-m04" exists ...
	I1028 17:25:36.370829  175525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-949926-m04
	I1028 17:25:36.388547  175525 host.go:66] Checking if "ha-949926-m04" exists ...
	I1028 17:25:36.388831  175525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:25:36.388870  175525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-949926-m04
	I1028 17:25:36.405659  175525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/ha-949926-m04/id_rsa Username:docker}
	I1028 17:25:36.492868  175525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:25:36.503343  175525 status.go:176] ha-949926-m04 status: &{Name:ha-949926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)
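Note the exit status 7 above: minikube status deliberately encodes cluster health in its exit code, so a stopped secondary makes the command exit nonzero even though stdout is well-formed. A minimal Go sketch (assuming the same tree-local binary and profile as this run) that separates that state code from a genuine launch failure:

	// statusexit.go: run minikube status and report its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-949926", "status")
		out, err := cmd.Output() // stdout is still populated on a nonzero exit
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Nonzero here encodes node state, not a crash; the run
			// above returned 7 with ha-949926-m02 stopped.
			fmt.Println("status exit code:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("could not run status:", err) // binary missing, etc.
		}
		fmt.Print(string(out))
	}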

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 node start m02 -v=7 --alsologtostderr
E1028 17:25:37.740419  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:25:46.291233  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-949926 node start m02 -v=7 --alsologtostderr: (21.678653638s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr: (1.060150726s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.82s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (164.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-949926 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-949926 -v=7 --alsologtostderr
E1028 17:26:05.443406  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-949926 -v=7 --alsologtostderr: (26.463384019s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-949926 --wait=true -v=7 --alsologtostderr
E1028 17:27:08.213420  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-949926 --wait=true -v=7 --alsologtostderr: (2m17.781200086s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-949926
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (164.35s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-949926 node delete m03 -v=7 --alsologtostderr: (11.399437546s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 stop -v=7 --alsologtostderr
E1028 17:29:24.351219  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-949926 stop -v=7 --alsologtostderr: (35.412475782s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr: exit status 7 (107.143961ms)

                                                
                                                
-- stdout --
	ha-949926
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949926-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949926-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:29:33.642845  193013 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:29:33.643152  193013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:29:33.643162  193013 out.go:358] Setting ErrFile to fd 2...
	I1028 17:29:33.643166  193013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:29:33.643342  193013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:29:33.643506  193013 out.go:352] Setting JSON to false
	I1028 17:29:33.643535  193013 mustload.go:65] Loading cluster: ha-949926
	I1028 17:29:33.643607  193013 notify.go:220] Checking for updates...
	I1028 17:29:33.644490  193013 config.go:182] Loaded profile config "ha-949926": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:29:33.644580  193013 status.go:174] checking status of ha-949926 ...
	I1028 17:29:33.645787  193013 cli_runner.go:164] Run: docker container inspect ha-949926 --format={{.State.Status}}
	I1028 17:29:33.664660  193013 status.go:371] ha-949926 host status = "Stopped" (err=<nil>)
	I1028 17:29:33.664683  193013 status.go:384] host is not running, skipping remaining checks
	I1028 17:29:33.664690  193013 status.go:176] ha-949926 status: &{Name:ha-949926 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:29:33.664719  193013 status.go:174] checking status of ha-949926-m02 ...
	I1028 17:29:33.664974  193013 cli_runner.go:164] Run: docker container inspect ha-949926-m02 --format={{.State.Status}}
	I1028 17:29:33.682546  193013 status.go:371] ha-949926-m02 host status = "Stopped" (err=<nil>)
	I1028 17:29:33.682587  193013 status.go:384] host is not running, skipping remaining checks
	I1028 17:29:33.682598  193013 status.go:176] ha-949926-m02 status: &{Name:ha-949926-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:29:33.682637  193013 status.go:174] checking status of ha-949926-m04 ...
	I1028 17:29:33.682937  193013 cli_runner.go:164] Run: docker container inspect ha-949926-m04 --format={{.State.Status}}
	I1028 17:29:33.700669  193013 status.go:371] ha-949926-m04 host status = "Stopped" (err=<nil>)
	I1028 17:29:33.700693  193013 status.go:384] host is not running, skipping remaining checks
	I1028 17:29:33.700701  193013 status.go:176] ha-949926-m04 status: &{Name:ha-949926-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (110.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-949926 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1028 17:29:52.056087  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:30:37.740975  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-949926 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m50.071212862s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (110.84s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (35.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-949926 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-949926 --control-plane -v=7 --alsologtostderr: (34.970730616s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-949926 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (38.66s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-122952 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-122952 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (38.65777305s)
--- PASS: TestJSONOutput/start/Command (38.66s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-122952 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-122952 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-122952 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-122952 --output=json --user=testUser: (5.689265127s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-854447 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-854447 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (73.386523ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"24738676-b31f-44b2-abcb-0e694b62dcfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-854447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e30a796-2458-42be-b608-df860d1f6270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19872"}}
	{"specversion":"1.0","id":"cf784d0f-180d-4cb9-8081-dc45cb1d4572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1b4ee2e-cf85-475d-a960-4f1cab984079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig"}}
	{"specversion":"1.0","id":"4147835a-23cb-4d4a-bd4f-7e1da7b1c135","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube"}}
	{"specversion":"1.0","id":"2d41bf2c-9d62-4ed6-82a7-06162cdb8925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"81b504c7-09a4-4cbe-af56-f3cf7822f84e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"242b5bf8-2aa1-43a7-8364-fd88fdbe1ecf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-854447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-854447
--- PASS: TestErrorJSONOutput (0.22s)
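Each line that start --output=json emits is a CloudEvents envelope, as the stdout block above shows: specversion, id, source, a type such as io.k8s.sigs.minikube.error, and a data object of string fields (message, exitcode, name, currentstep, totalsteps). A minimal decoding sketch; the struct below mirrors only the fields visible in this log, not minikube's own types:

	// events.go: decode one line-delimited CloudEvents envelope.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
		Data        map[string]string `json:"data"` // message, exitcode, name, ...
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
	}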

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.32s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-742659 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-742659 --network=: (34.272281719s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-742659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-742659
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-742659: (2.0301209s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.32s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.69s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-173305 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-173305 --network=bridge: (22.813611829s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-173305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-173305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-173305: (1.856084441s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.69s)

                                                
                                    
TestKicExistingNetwork (25.43s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1028 17:34:00.390469  108914 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1028 17:34:00.408441  108914 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1028 17:34:00.408507  108914 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1028 17:34:00.408525  108914 cli_runner.go:164] Run: docker network inspect existing-network
W1028 17:34:00.424845  108914 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1028 17:34:00.424876  108914 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1028 17:34:00.424894  108914 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1028 17:34:00.425001  108914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 17:34:00.447113  108914 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-357e3880c032 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:08:14:8e:45} reservation:<nil>}
I1028 17:34:00.447870  108914 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0008093d0}
I1028 17:34:00.447921  108914 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1028 17:34:00.447976  108914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1028 17:34:00.514386  108914 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-410240 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-410240 --network=existing-network: (23.40137889s)
helpers_test.go:175: Cleaning up "existing-network-410240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-410240
E1028 17:34:24.350844  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-410240: (1.868624506s)
I1028 17:34:25.801867  108914 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.43s)
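The network_create.go lines above log the exact docker network create invocation minikube issues; the test pre-creates "existing-network" the same way so that start --network=existing-network adopts it rather than allocating a fresh subnet. A sketch reproducing that setup, with the flags copied verbatim from the logged command (choose another subnet if 192.168.58.0/24 is already taken on your host):

	// netpre.go: pre-create a bridge network the way minikube does above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
			// The -o flags below are copied verbatim from the logged command.
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network").CombinedOutput()
		fmt.Println(string(out), err)
	}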

                                                
                                    
TestKicCustomSubnet (26.69s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-541896 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-541896 --subnet=192.168.60.0/24: (24.59449448s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-541896 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-541896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-541896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-541896: (2.079640672s)
--- PASS: TestKicCustomSubnet (26.69s)

                                                
                                    
TestKicStaticIP (26.05s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-110528 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-110528 --static-ip=192.168.200.200: (23.918466168s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-110528 ip
helpers_test.go:175: Cleaning up "static-ip-110528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-110528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-110528: (2.001210272s)
--- PASS: TestKicStaticIP (26.05s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (48.85s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-315132 --driver=docker  --container-runtime=crio
E1028 17:35:37.741225  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-315132 --driver=docker  --container-runtime=crio: (22.202931287s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-337217 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-337217 --driver=docker  --container-runtime=crio: (21.400815411s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-315132
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-337217
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-337217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-337217
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-337217: (1.874796709s)
helpers_test.go:175: Cleaning up "first-315132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-315132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-315132: (2.204319486s)
--- PASS: TestMinikubeProfile (48.85s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-652633 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-652633 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.128732938s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.13s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-652633 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-665220 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-665220 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.816426972s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.82s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-665220 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-652633 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-652633 --alsologtostderr -v=5: (1.604501033s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-665220 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-665220
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-665220: (1.17854289s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.84s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-665220
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-665220: (6.838140218s)
--- PASS: TestMountStart/serial/RestartStopped (7.84s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-665220 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (67.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-258974 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1028 17:37:00.805651  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-258974 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.258641913s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.72s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-258974 -- rollout status deployment/busybox: (4.319176461s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-7xz97 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-n4476 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-7xz97 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-n4476 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-7xz97 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-n4476 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.72s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-7xz97 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-7xz97 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-n4476 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-258974 -- exec busybox-7dff88458-n4476 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
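The shell pipeline in this test leans on the fixed output layout of nslookup in this busybox image, where the answer address lands on line 5 with the address in the third space-separated field; a sketch, using a pod name from this run:

    # Extract the host IP that host.minikube.internal resolves to, then ping it.
    # awk 'NR==5' selects the answer line; cut -d' ' -f3 takes the address field.
    HOST_IP=$(kubectl --context multinode-258974 exec busybox-7dff88458-7xz97 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-258974 exec busybox-7dff88458-7xz97 -- \
      sh -c "ping -c 1 $HOST_IP"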

                                                
                                    
TestMultiNode/serial/AddNode (28.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-258974 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-258974 -v 3 --alsologtostderr: (27.95437697s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.55s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-258974 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp testdata/cp-test.txt multinode-258974:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2007731193/001/cp-test_multinode-258974.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974:/home/docker/cp-test.txt multinode-258974-m02:/home/docker/cp-test_multinode-258974_multinode-258974-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m02 "sudo cat /home/docker/cp-test_multinode-258974_multinode-258974-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974:/home/docker/cp-test.txt multinode-258974-m03:/home/docker/cp-test_multinode-258974_multinode-258974-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m03 "sudo cat /home/docker/cp-test_multinode-258974_multinode-258974-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp testdata/cp-test.txt multinode-258974-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2007731193/001/cp-test_multinode-258974-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974-m02:/home/docker/cp-test.txt multinode-258974:/home/docker/cp-test_multinode-258974-m02_multinode-258974.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974 "sudo cat /home/docker/cp-test_multinode-258974-m02_multinode-258974.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974-m02:/home/docker/cp-test.txt multinode-258974-m03:/home/docker/cp-test_multinode-258974-m02_multinode-258974-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m03 "sudo cat /home/docker/cp-test_multinode-258974-m02_multinode-258974-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp testdata/cp-test.txt multinode-258974-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2007731193/001/cp-test_multinode-258974-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974-m03:/home/docker/cp-test.txt multinode-258974:/home/docker/cp-test_multinode-258974-m03_multinode-258974.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974 "sudo cat /home/docker/cp-test_multinode-258974-m03_multinode-258974.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 cp multinode-258974-m03:/home/docker/cp-test.txt multinode-258974-m02:/home/docker/cp-test_multinode-258974-m03_multinode-258974-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 ssh -n multinode-258974-m02 "sudo cat /home/docker/cp-test_multinode-258974-m03_multinode-258974-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.85s)
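The sequence above exercises every source/target combination the cp subcommand accepts; condensed to the general forms seen in this run:

    # host -> node (in-node paths must be absolute):
    minikube -p multinode-258974 cp testdata/cp-test.txt multinode-258974-m02:/home/docker/cp-test.txt
    # node -> host:
    minikube -p multinode-258974 cp multinode-258974-m02:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node:
    minikube -p multinode-258974 cp multinode-258974-m02:/home/docker/cp-test.txt multinode-258974-m03:/home/docker/cp-test.txt
    # verify on the receiving node:
    minikube -p multinode-258974 ssh -n multinode-258974-m03 "sudo cat /home/docker/cp-test.txt"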

                                                
                                    
TestMultiNode/serial/StopNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-258974 node stop m03: (1.183516055s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-258974 status: exit status 7 (457.720763ms)

                                                
                                                
-- stdout --
	multinode-258974
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-258974-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-258974-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr: exit status 7 (473.325923ms)

                                                
                                                
-- stdout --
	multinode-258974
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-258974-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-258974-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:38:29.706879  258503 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:38:29.707006  258503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:38:29.707020  258503 out.go:358] Setting ErrFile to fd 2...
	I1028 17:38:29.707027  258503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:38:29.707248  258503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:38:29.707433  258503 out.go:352] Setting JSON to false
	I1028 17:38:29.707463  258503 mustload.go:65] Loading cluster: multinode-258974
	I1028 17:38:29.707608  258503 notify.go:220] Checking for updates...
	I1028 17:38:29.707925  258503 config.go:182] Loaded profile config "multinode-258974": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:38:29.707950  258503 status.go:174] checking status of multinode-258974 ...
	I1028 17:38:29.708395  258503 cli_runner.go:164] Run: docker container inspect multinode-258974 --format={{.State.Status}}
	I1028 17:38:29.729207  258503 status.go:371] multinode-258974 host status = "Running" (err=<nil>)
	I1028 17:38:29.729246  258503 host.go:66] Checking if "multinode-258974" exists ...
	I1028 17:38:29.729520  258503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-258974
	I1028 17:38:29.748567  258503 host.go:66] Checking if "multinode-258974" exists ...
	I1028 17:38:29.748839  258503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:38:29.748878  258503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-258974
	I1028 17:38:29.769692  258503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/multinode-258974/id_rsa Username:docker}
	I1028 17:38:29.852793  258503 ssh_runner.go:195] Run: systemctl --version
	I1028 17:38:29.856705  258503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:38:29.867257  258503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:38:29.921474  258503 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:63 SystemTime:2024-10-28 17:38:29.907702528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:38:29.922079  258503 kubeconfig.go:125] found "multinode-258974" server: "https://192.168.67.2:8443"
	I1028 17:38:29.922108  258503 api_server.go:166] Checking apiserver status ...
	I1028 17:38:29.922142  258503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:38:29.932987  258503 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1479/cgroup
	I1028 17:38:29.942047  258503 api_server.go:182] apiserver freezer: "2:freezer:/docker/5a12421f3dcb630dac44be174b7f93542414af3a5df4ffa74c47f39656908a42/crio/crio-671efe5bf0a92b3910bdcc847c4ca834acb59000a2aafd5e3a8c5de78bbf9ebb"
	I1028 17:38:29.942132  258503 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5a12421f3dcb630dac44be174b7f93542414af3a5df4ffa74c47f39656908a42/crio/crio-671efe5bf0a92b3910bdcc847c4ca834acb59000a2aafd5e3a8c5de78bbf9ebb/freezer.state
	I1028 17:38:29.949961  258503 api_server.go:204] freezer state: "THAWED"
	I1028 17:38:29.949996  258503 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1028 17:38:29.953720  258503 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1028 17:38:29.953747  258503 status.go:463] multinode-258974 apiserver status = Running (err=<nil>)
	I1028 17:38:29.953760  258503 status.go:176] multinode-258974 status: &{Name:multinode-258974 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:38:29.953785  258503 status.go:174] checking status of multinode-258974-m02 ...
	I1028 17:38:29.954032  258503 cli_runner.go:164] Run: docker container inspect multinode-258974-m02 --format={{.State.Status}}
	I1028 17:38:29.974725  258503 status.go:371] multinode-258974-m02 host status = "Running" (err=<nil>)
	I1028 17:38:29.974767  258503 host.go:66] Checking if "multinode-258974-m02" exists ...
	I1028 17:38:29.975019  258503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-258974-m02
	I1028 17:38:29.993044  258503 host.go:66] Checking if "multinode-258974-m02" exists ...
	I1028 17:38:29.993304  258503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:38:29.993352  258503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-258974-m02
	I1028 17:38:30.011213  258503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19872-102136/.minikube/machines/multinode-258974-m02/id_rsa Username:docker}
	I1028 17:38:30.097852  258503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:38:30.108547  258503 status.go:176] multinode-258974-m02 status: &{Name:multinode-258974-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:38:30.108592  258503 status.go:174] checking status of multinode-258974-m03 ...
	I1028 17:38:30.108842  258503 cli_runner.go:164] Run: docker container inspect multinode-258974-m03 --format={{.State.Status}}
	I1028 17:38:30.127414  258503 status.go:371] multinode-258974-m03 host status = "Stopped" (err=<nil>)
	I1028 17:38:30.127445  258503 status.go:384] host is not running, skipping remaining checks
	I1028 17:38:30.127454  258503 status.go:176] multinode-258974-m03 status: &{Name:multinode-258974-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)
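Note the exit code: status deliberately exits non-zero (7 in this run) while a host is stopped, so scripts wrapping it should branch on the code rather than treat any non-zero exit as a hard failure; a sketch under that assumption:

    minikube -p multinode-258974 status
    case $? in
      0) echo "all nodes running" ;;
      7) echo "at least one host is stopped (as in the output above)" ;;
      *) echo "unexpected status error" ;;
    esac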

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-258974 node start m03 -v=7 --alsologtostderr: (8.350313939s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.01s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (109.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-258974
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-258974
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-258974: (24.695429613s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-258974 --wait=true -v=8 --alsologtostderr
E1028 17:39:24.351116  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-258974 --wait=true -v=8 --alsologtostderr: (1m24.743974278s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-258974
--- PASS: TestMultiNode/serial/RestartKeepsNodes (109.55s)
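Condensed, the restart invariant being checked is that a full stop followed by a waited start brings back the same node set, including the m03 node that was stopped and restarted earlier:

    minikube node list -p multinode-258974     # record the node set
    minikube stop -p multinode-258974          # stops every node in the profile
    minikube start -p multinode-258974 --wait=true
    minikube node list -p multinode-258974     # same three nodes expected back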

                                                
                                    
TestMultiNode/serial/DeleteNode (5.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-258974 node delete m03: (4.51765154s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.08s)
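The go-template in the final check prints one Ready-condition status per remaining node; after deleting m03, a healthy run prints exactly two True lines:

    # One line per node: the status of its Ready condition ("True" when Ready).
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # Expected after the delete (two nodes left, both Ready):
    # True
    # True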

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 stop
E1028 17:40:37.740516  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:40:47.418671  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-258974 stop: (23.560845689s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-258974 status: exit status 7 (85.331813ms)

                                                
                                                
-- stdout --
	multinode-258974
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-258974-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr: exit status 7 (88.9923ms)

                                                
                                                
-- stdout --
	multinode-258974
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-258974-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:40:57.466019  268127 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:40:57.466444  268127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:40:57.466456  268127 out.go:358] Setting ErrFile to fd 2...
	I1028 17:40:57.466461  268127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:40:57.466648  268127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:40:57.466848  268127 out.go:352] Setting JSON to false
	I1028 17:40:57.466883  268127 mustload.go:65] Loading cluster: multinode-258974
	I1028 17:40:57.467002  268127 notify.go:220] Checking for updates...
	I1028 17:40:57.467960  268127 config.go:182] Loaded profile config "multinode-258974": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:40:57.468044  268127 status.go:174] checking status of multinode-258974 ...
	I1028 17:40:57.469248  268127 cli_runner.go:164] Run: docker container inspect multinode-258974 --format={{.State.Status}}
	I1028 17:40:57.486394  268127 status.go:371] multinode-258974 host status = "Stopped" (err=<nil>)
	I1028 17:40:57.486424  268127 status.go:384] host is not running, skipping remaining checks
	I1028 17:40:57.486434  268127 status.go:176] multinode-258974 status: &{Name:multinode-258974 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:40:57.486467  268127 status.go:174] checking status of multinode-258974-m02 ...
	I1028 17:40:57.486877  268127 cli_runner.go:164] Run: docker container inspect multinode-258974-m02 --format={{.State.Status}}
	I1028 17:40:57.505727  268127 status.go:371] multinode-258974-m02 host status = "Stopped" (err=<nil>)
	I1028 17:40:57.505770  268127 status.go:384] host is not running, skipping remaining checks
	I1028 17:40:57.505778  268127 status.go:176] multinode-258974-m02 status: &{Name:multinode-258974-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-258974 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-258974 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (49.242888042s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-258974 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-258974
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-258974-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-258974-m02 --driver=docker  --container-runtime=crio: exit status 14 (70.042813ms)

                                                
                                                
-- stdout --
	* [multinode-258974-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-258974-m02' is duplicated with machine name 'multinode-258974-m02' in profile 'multinode-258974'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-258974-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-258974-m03 --driver=docker  --container-runtime=crio: (21.207417532s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-258974
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-258974: exit status 80 (272.904453ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-258974 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-258974-m03 already exists in multinode-258974-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-258974-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-258974-m03: (1.863787947s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.47s)
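The two failures are distinct guards. Secondary nodes of a profile P get machine names P-m02, P-m03, and so on, so a new profile may not reuse one of those names, and node add refuses to create a node whose derived machine name already belongs to another profile. In outline:

    # Rejected immediately (MK_USAGE, exit 14): "multinode-258974-m02" is already
    # the machine name of the second node in profile multinode-258974.
    minikube start -p multinode-258974-m02 --driver=docker --container-runtime=crio
    # Starts fine as a standalone profile...
    minikube start -p multinode-258974-m03 --driver=docker --container-runtime=crio
    # ...but now adding a node to the original profile collides (GUEST_NODE_ADD,
    # exit 80), since the new node would also be named multinode-258974-m03.
    minikube node add -p multinode-258974
    minikube delete -p multinode-258974-m03   # cleanup removes the conflict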

                                                
                                    
TestPreload (116.52s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-977796 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-977796 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m18.466049857s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-977796 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-977796 image pull gcr.io/k8s-minikube/busybox: (3.475518015s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-977796
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-977796: (5.723051684s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-977796 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-977796 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (26.273843839s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-977796 image list
helpers_test.go:175: Cleaning up "test-preload-977796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-977796
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-977796: (2.339966485s)
--- PASS: TestPreload (116.52s)
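The scenario, condensed: build a cluster with the preloaded-images tarball disabled, side-load an extra image, and verify that a stop/start cycle keeps it in the CRI-O image store. A sketch using this run's profile name:

    minikube start -p test-preload-977796 --preload=false --driver=docker \
      --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-977796 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-977796
    minikube start -p test-preload-977796 --driver=docker --container-runtime=crio
    minikube -p test-preload-977796 image list   # busybox should still be listed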

                                                
                                    
TestScheduledStopUnix (96.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-116499 --memory=2048 --driver=docker  --container-runtime=crio
E1028 17:44:24.353700  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-116499 --memory=2048 --driver=docker  --container-runtime=crio: (20.825789211s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-116499 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-116499 -n scheduled-stop-116499
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-116499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1028 17:44:32.410378  108914 retry.go:31] will retry after 85.972µs: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.411558  108914 retry.go:31] will retry after 165.343µs: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.412701  108914 retry.go:31] will retry after 306.757µs: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.413849  108914 retry.go:31] will retry after 226.672µs: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.414977  108914 retry.go:31] will retry after 480.333µs: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.416106  108914 retry.go:31] will retry after 509.278µs: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.417261  108914 retry.go:31] will retry after 795.764µs: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.418395  108914 retry.go:31] will retry after 2.140948ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.421617  108914 retry.go:31] will retry after 3.812079ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.425896  108914 retry.go:31] will retry after 4.5188ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.431163  108914 retry.go:31] will retry after 5.04655ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.436334  108914 retry.go:31] will retry after 12.10563ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.449627  108914 retry.go:31] will retry after 14.684301ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.464897  108914 retry.go:31] will retry after 14.03579ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.479106  108914 retry.go:31] will retry after 20.846746ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
I1028 17:44:32.500388  108914 retry.go:31] will retry after 33.432729ms: open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/scheduled-stop-116499/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-116499 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-116499 -n scheduled-stop-116499
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-116499
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-116499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1028 17:45:37.742803  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-116499
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-116499: exit status 7 (69.138594ms)

                                                
                                                
-- stdout --
	scheduled-stop-116499
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-116499 -n scheduled-stop-116499
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-116499 -n scheduled-stop-116499: exit status 7 (68.694082ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-116499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-116499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-116499: (4.693097116s)
--- PASS: TestScheduledStopUnix (96.88s)
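The subtest drives the stop scheduler through its three states: arm, cancel, then re-arm and let the timer fire. Condensed (the sleep is illustrative; the test polls instead):

    minikube stop -p scheduled-stop-116499 --schedule 5m       # arm a stop 5 minutes out
    minikube stop -p scheduled-stop-116499 --cancel-scheduled  # disarm it
    minikube stop -p scheduled-stop-116499 --schedule 15s      # arm a short timer
    sleep 20
    minikube status -p scheduled-stop-116499                   # exit 7: host Stopped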

                                                
                                    
TestInsufficientStorage (9.71s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-485999 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-485999 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.355058133s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"05503534-d531-4f78-acf1-c693db8a6bee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-485999] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"de77a9af-af1b-4582-a807-a9c255fc9fb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19872"}}
	{"specversion":"1.0","id":"ca48c70e-2849-4652-a811-e72bb57c83db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e7ac74f1-f34c-4625-b900-34a6d0c91b46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig"}}
	{"specversion":"1.0","id":"c4f41d70-d2da-4316-9649-c3e2958d1949","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube"}}
	{"specversion":"1.0","id":"285b3812-87c1-41ea-be5e-36bcf8ee45f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f0db9e66-0334-4298-8e3f-5cd62b3721ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ed9a6aeb-e1fa-44ad-b1ba-9a69e41dad25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ab7252ee-a50c-4976-9076-b803b448e98c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"74b74de8-3878-45a0-a3a2-ed666bb32e9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e156e20-e2c5-4b16-99f0-2ed60d78754f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"096590bd-5bf2-4907-abea-c67c54a7ca0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-485999\" primary control-plane node in \"insufficient-storage-485999\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b722c6c0-b0c5-4849-8cc1-e64f8bf1bf39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730110049-19872 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba6ce372-18bb-4a03-924b-22e3a5f74d2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4b5bec2-7130-400d-a03b-7344e19589ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-485999 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-485999 --output=json --layout=cluster: exit status 7 (267.904233ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-485999","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-485999","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 17:45:55.666559  290501 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-485999" does not appear in /home/jenkins/minikube-integration/19872-102136/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-485999 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-485999 --output=json --layout=cluster: exit status 7 (262.748359ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-485999","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-485999","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 17:45:55.929815  290600 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-485999" does not appear in /home/jenkins/minikube-integration/19872-102136/kubeconfig
	E1028 17:45:55.940124  290600 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/insufficient-storage-485999/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-485999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-485999
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-485999: (1.820225724s)
--- PASS: TestInsufficientStorage (9.71s)
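Because --output=json emits one CloudEvents-style JSON object per line, the RSRC_DOCKER_STORAGE failure above is machine-readable; a sketch, assuming jq is installed:

    # Surface the error event's exit code and message from the start stream:
    minikube start -p insufficient-storage-485999 --output=json \
      --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
               | "\(.data.exitcode): \(.data.message)"'
    # Cluster-level status as JSON (StatusCode 507 = InsufficientStorage here):
    minikube status -p insufficient-storage-485999 --output=json --layout=cluster \
      | jq '{Name, StatusCode, StatusName}'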

                                                
                                    
TestRunningBinaryUpgrade (140.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1732569558 start -p running-upgrade-060760 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1732569558 start -p running-upgrade-060760 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m36.576772823s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-060760 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-060760 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.894743186s)
helpers_test.go:175: Cleaning up "running-upgrade-060760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-060760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-060760: (4.738123334s)
--- PASS: TestRunningBinaryUpgrade (140.04s)

                                                
                                    
TestKubernetesUpgrade (333.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.606428319s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-472312
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-472312: (1.211938134s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-472312 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-472312 status --format={{.Host}}: exit status 7 (71.027701ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.823090541s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-472312 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (76.015806ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-472312] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-472312
	    minikube start -p kubernetes-upgrade-472312 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4723122 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-472312 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1028 17:53:40.807943  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-472312 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.432834556s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-472312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-472312
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-472312: (2.174557774s)
--- PASS: TestKubernetesUpgrade (333.46s)
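The rule this test pins down: upgrades happen in place, downgrades are refused, and the supported downgrade path is delete-and-recreate (as the error output itself suggests). Condensed:

    minikube start -p kubernetes-upgrade-472312 --kubernetes-version=v1.20.0 \
      --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-472312
    minikube start -p kubernetes-upgrade-472312 --kubernetes-version=v1.31.2 \
      --driver=docker --container-runtime=crio   # in-place upgrade: allowed
    minikube start -p kubernetes-upgrade-472312 --kubernetes-version=v1.20.0 \
      --driver=docker --container-runtime=crio   # downgrade: refused, exit 106
    # Supported downgrade path:
    minikube delete -p kubernetes-upgrade-472312
    minikube start -p kubernetes-upgrade-472312 --kubernetes-version=v1.20.0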

                                                
                                    
TestMissingContainerUpgrade (94.1s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2084180043 start -p missing-upgrade-050968 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2084180043 start -p missing-upgrade-050968 --memory=2200 --driver=docker  --container-runtime=crio: (25.991917096s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-050968
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-050968: (13.044980937s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-050968
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-050968 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-050968 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.155809899s)
helpers_test.go:175: Cleaning up "missing-upgrade-050968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-050968
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-050968: (4.574236486s)
--- PASS: TestMissingContainerUpgrade (94.10s)
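The recovery path under test: the old release creates the cluster, the container is then removed behind minikube's back, and the new binary must recreate it from the profile on disk. Condensed from the run above (the old binary is unpacked to a temp path by the harness):

    /tmp/minikube-v1.26.0.2084180043 start -p missing-upgrade-050968 --memory=2200 \
      --driver=docker --container-runtime=crio
    docker stop missing-upgrade-050968
    docker rm missing-upgrade-050968
    out/minikube-linux-amd64 start -p missing-upgrade-050968 --memory=2200 \
      --driver=docker --container-runtime=crio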

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-608869 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-608869 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (82.238422ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-608869] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
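Note: the exit-status-14 failure above is the success condition for this subtest; minikube rejects --kubernetes-version when --no-kubernetes is set. As a sketch built only from flags that appear elsewhere in this log, either of the following would pass that validation:

	# start without Kubernetes, dropping the version flag (as the later subtests do)
	out/minikube-linux-amd64 start -p NoKubernetes-608869 --no-kubernetes --driver=docker --container-runtime=crio
	# or clear a globally configured version first, as the error text suggests
	out/minikube-linux-amd64 config unset kubernetes-version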

TestStoppedBinaryUpgrade/Setup (2.81s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.81s)

TestNoKubernetes/serial/StartWithK8s (26.27s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-608869 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-608869 --driver=docker  --container-runtime=crio: (25.951916832s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-608869 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.27s)

TestStoppedBinaryUpgrade/Upgrade (127.55s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3978440170 start -p stopped-upgrade-645366 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3978440170 start -p stopped-upgrade-645366 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m36.106533363s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3978440170 -p stopped-upgrade-645366 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3978440170 -p stopped-upgrade-645366 stop: (2.529134093s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-645366 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-645366 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.913626431s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (127.55s)

TestNoKubernetes/serial/StartWithStopK8s (9.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-608869 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-608869 --no-kubernetes --driver=docker  --container-runtime=crio: (6.868933034s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-608869 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-608869 status -o json: exit status 2 (349.762529ms)

-- stdout --
	{"Name":"NoKubernetes-608869","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-608869
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-608869: (2.229740263s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.45s)

TestNoKubernetes/serial/Start (11.44s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-608869 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-608869 --no-kubernetes --driver=docker  --container-runtime=crio: (11.435539922s)
--- PASS: TestNoKubernetes/serial/Start (11.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-608869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-608869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.72816ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
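Note: the non-zero exit is exactly what the assertion expects; systemctl is-active returns 0 only for an active unit, and the status 3 surfaced through ssh is systemd's standard code for an inactive one, i.e. kubelet really is not running. A hand-run sketch of the same check (hypothetical interactive session; command copied from the test):

	out/minikube-linux-amd64 ssh -p NoKubernetes-608869 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # expect 3 while kubelet is stopped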

TestNoKubernetes/serial/ProfileList (2.66s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.313847997s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.66s)

TestNoKubernetes/serial/Stop (1.6s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-608869
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-608869: (1.598941221s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

TestNoKubernetes/serial/StartNoArgs (8.17s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-608869 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-608869 --driver=docker  --container-runtime=crio: (8.17057734s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.17s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-608869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-608869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.089955ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestNetworkPlugins/group/false (4.06s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-779252 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-779252 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (247.910205ms)

-- stdout --
	* [false-779252] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1028 17:47:02.739586  306135 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:47:02.739718  306135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:47:02.739729  306135 out.go:358] Setting ErrFile to fd 2...
	I1028 17:47:02.739736  306135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:47:02.739982  306135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-102136/.minikube/bin
	I1028 17:47:02.740608  306135 out.go:352] Setting JSON to false
	I1028 17:47:02.741640  306135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5364,"bootTime":1730132259,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:47:02.741719  306135 start.go:139] virtualization: kvm guest
	I1028 17:47:02.743489  306135 out.go:177] * [false-779252] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:47:02.745456  306135 notify.go:220] Checking for updates...
	I1028 17:47:02.745462  306135 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:47:02.747077  306135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:47:02.749191  306135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-102136/kubeconfig
	I1028 17:47:02.750894  306135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-102136/.minikube
	I1028 17:47:02.752562  306135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:47:02.754233  306135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:47:02.756235  306135 config.go:182] Loaded profile config "force-systemd-env-910504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:47:02.756385  306135 config.go:182] Loaded profile config "running-upgrade-060760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 17:47:02.756498  306135 config.go:182] Loaded profile config "stopped-upgrade-645366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 17:47:02.756661  306135 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:47:02.782949  306135 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 17:47:02.783088  306135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 17:47:02.878468  306135 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:58 SystemTime:2024-10-28 17:47:02.859676555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 17:47:02.878614  306135 docker.go:318] overlay module found
	I1028 17:47:02.881188  306135 out.go:177] * Using the docker driver based on user configuration
	I1028 17:47:02.882678  306135 start.go:297] selected driver: docker
	I1028 17:47:02.882698  306135 start.go:901] validating driver "docker" against <nil>
	I1028 17:47:02.882711  306135 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:47:02.929448  306135 out.go:201] 
	W1028 17:47:02.931304  306135 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1028 17:47:02.933116  306135 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-779252 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-779252

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-779252

>>> host: /etc/nsswitch.conf:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /etc/hosts:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /etc/resolv.conf:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-779252

>>> host: crictl pods:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: crictl containers:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> k8s: describe netcat deployment:
error: context "false-779252" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-779252" does not exist

>>> k8s: netcat logs:
error: context "false-779252" does not exist

>>> k8s: describe coredns deployment:
error: context "false-779252" does not exist

>>> k8s: describe coredns pods:
error: context "false-779252" does not exist

>>> k8s: coredns logs:
error: context "false-779252" does not exist

>>> k8s: describe api server pod(s):
error: context "false-779252" does not exist

>>> k8s: api server logs:
error: context "false-779252" does not exist

>>> host: /etc/cni:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: ip a s:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: ip r s:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: iptables-save:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: iptables table nat:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> k8s: describe kube-proxy daemon set:
error: context "false-779252" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-779252" does not exist

>>> k8s: kube-proxy logs:
error: context "false-779252" does not exist

>>> host: kubelet daemon status:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: kubelet daemon config:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> k8s: kubelet logs:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-779252

>>> host: docker daemon status:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: docker daemon config:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /etc/docker/daemon.json:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: docker system info:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: cri-docker daemon status:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: cri-docker daemon config:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: cri-dockerd version:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: containerd daemon status:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: containerd daemon config:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /etc/containerd/config.toml:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: containerd config dump:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: crio daemon status:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: crio daemon config:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: /etc/crio:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

>>> host: crio config:
* Profile "false-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779252"

----------------------- debugLogs end: false-779252 [took: 3.604281484s] --------------------------------
helpers_test.go:175: Cleaning up "false-779252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-779252
--- PASS: TestNetworkPlugins/group/false (4.06s)
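Note: the MK_USAGE rejection above is the expected outcome; this test verifies that --cni=false cannot be combined with CRI-O, which requires a CNI plugin for pod networking. For illustration only (this invocation is not taken from the run), a start that passes the same validation names an explicit CNI instead of disabling it:

	out/minikube-linux-amd64 start -p false-779252 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio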

TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-645366
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

TestPause/serial/Start (44.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-986385 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-986385 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.974394084s)
--- PASS: TestPause/serial/Start (44.97s)

TestPause/serial/SecondStartNoReconfiguration (27.75s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-986385 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-986385 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.735271423s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.75s)

TestPause/serial/Pause (1.02s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-986385 --alsologtostderr -v=5
E1028 17:49:24.350831  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-986385 --alsologtostderr -v=5: (1.024771145s)
--- PASS: TestPause/serial/Pause (1.02s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-986385 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-986385 --output=json --layout=cluster: exit status 2 (302.459705ms)

-- stdout --
	{"Name":"pause-986385","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-986385","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
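Note: exit status 2 with a well-formed JSON body is the expected shape for a paused cluster; as the output above shows, --layout=cluster reports state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A quick way to pull out just the per-node component states, assuming jq is available on the host (not part of the test itself):

	out/minikube-linux-amd64 status -p pause-986385 --output=json --layout=cluster | jq '.Nodes[].Components'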

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-986385 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (0.79s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-986385 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

TestPause/serial/DeletePaused (2.67s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-986385 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-986385 --alsologtostderr -v=5: (2.67383809s)
--- PASS: TestPause/serial/DeletePaused (2.67s)

TestPause/serial/VerifyDeletedResources (22.1s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (22.043246008s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-986385
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-986385: exit status 1 (17.589115ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-986385: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (22.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (135.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-955756 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-955756 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m15.911643865s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.91s)

TestStartStop/group/embed-certs/serial/FirstStart (48.51s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-782508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 17:50:37.741265  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-782508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (48.505053623s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.51s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-782508 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [688c5738-e1d4-43d1-b55c-5e77b448a1e1] Pending
helpers_test.go:344: "busybox" [688c5738-e1d4-43d1-b55c-5e77b448a1e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [688c5738-e1d4-43d1-b55c-5e77b448a1e1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003866486s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-782508 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-782508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-782508 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/Stop (12.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-782508 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-782508 --alsologtostderr -v=3: (12.28760798s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

TestStartStop/group/no-preload/serial/FirstStart (56.75s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-997866 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-997866 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (56.754616615s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782508 -n embed-certs-782508
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782508 -n embed-certs-782508: exit status 7 (71.101345ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-782508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (264.17s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-782508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-782508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m23.846518548s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782508 -n embed-certs-782508
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (264.17s)

TestStartStop/group/no-preload/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-997866 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0b5febd4-3ee4-4b73-81b2-4e2136e04489] Pending
helpers_test.go:344: "busybox" [0b5febd4-3ee4-4b73-81b2-4e2136e04489] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0b5febd4-3ee4-4b73-81b2-4e2136e04489] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004271811s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-997866 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-997866 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-997866 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-955756 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4c99efe7-af08-4fae-a8c7-ca450b17404f] Pending
helpers_test.go:344: "busybox" [4c99efe7-af08-4fae-a8c7-ca450b17404f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4c99efe7-af08-4fae-a8c7-ca450b17404f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003477717s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-955756 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

TestStartStop/group/no-preload/serial/Stop (11.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-997866 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-997866 --alsologtostderr -v=3: (11.861802994s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-955756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-955756 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-955756 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-955756 --alsologtostderr -v=3: (11.953358437s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997866 -n no-preload-997866
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997866 -n no-preload-997866: exit status 7 (70.207753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-997866 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (262.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-997866 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-997866 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m21.90360988s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997866 -n no-preload-997866
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-955756 -n old-k8s-version-955756
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-955756 -n old-k8s-version-955756: exit status 7 (98.020841ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-955756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (147.45s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-955756 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-955756 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m27.14103896s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-955756 -n old-k8s-version-955756
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (147.45s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.05s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-552171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 17:54:24.351134  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-552171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (39.04542878s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.05s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-552171 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [67ed9174-8fc9-40ab-944d-2e22e9a7662a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [67ed9174-8fc9-40ab-944d-2e22e9a7662a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003919734s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-552171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-552171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-552171 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-552171 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-552171 --alsologtostderr -v=3: (11.859932586s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q4fdm" [ee7a82a2-5b48-4148-872b-ac2bfdbb5471] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00388604s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
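Note: UserAppExistsAfterStop only polls for a Running pod matching the selector after the restart; the namespace, label, and 9m0s budget are all visible in the lines above. A rough kubectl-only sketch of the same wait (an approximation: kubectl's Ready condition is close to, but not identical to, the harness's Running check):

    kubectl --context old-k8s-version-955756 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m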

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171: exit status 7 (75.691095ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-552171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.56s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-552171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-552171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.258216975s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.56s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q4fdm" [ee7a82a2-5b48-4148-872b-ac2bfdbb5471] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004132631s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-955756 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-955756 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-955756 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-955756 -n old-k8s-version-955756
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-955756 -n old-k8s-version-955756: exit status 2 (313.722417ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-955756 -n old-k8s-version-955756
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-955756 -n old-k8s-version-955756: exit status 2 (324.461003ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-955756 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-955756 -n old-k8s-version-955756
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-955756 -n old-k8s-version-955756
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.88s)
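Note: every Pause test in this report runs the same sequence: pause, confirm the API server reports Paused and the kubelet reports Stopped (each via exit status 2, which the test accepts as "may be ok"), then unpause and re-check. A hedged shell rendering of that sequence, with all flags copied from the commands above (a sketch, not the harness code itself):

    p=old-k8s-version-955756
    out/minikube-linux-amd64 pause -p "$p" --alsologtostderr -v=1
    # While paused, these print Paused/Stopped and exit 2 ("may be ok").
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p" -n "$p" || true
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$p" -n "$p" || true
    out/minikube-linux-amd64 unpause -p "$p" --alsologtostderr -v=1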

TestStartStop/group/newest-cni/serial/FirstStart (31.59s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-159846 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-159846 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (31.594636271s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.59s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-25pbk" [0679b111-20d7-4274-9f4c-70fe24383d62] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004816956s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-25pbk" [0679b111-20d7-4274-9f4c-70fe24383d62] Running
E1028 17:55:37.740812  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/addons-803184/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004520752s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-782508 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-782508 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.85s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-782508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782508 -n embed-certs-782508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782508 -n embed-certs-782508: exit status 2 (320.480499ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-782508 -n embed-certs-782508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-782508 -n embed-certs-782508: exit status 2 (327.482051ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-782508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782508 -n embed-certs-782508
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-782508 -n embed-certs-782508
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.85s)

TestNetworkPlugins/group/auto/Start (38.37s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (38.370038613s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.37s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-159846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/newest-cni/serial/Stop (3.08s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-159846 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-159846 --alsologtostderr -v=3: (3.080295153s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.08s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-159846 -n newest-cni-159846
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-159846 -n newest-cni-159846: exit status 7 (77.461313ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-159846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (13.69s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-159846 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-159846 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (13.372021034s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-159846 -n newest-cni-159846
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.69s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-159846 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.86s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-159846 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-159846 -n newest-cni-159846
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-159846 -n newest-cni-159846: exit status 2 (304.404756ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-159846 -n newest-cni-159846
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-159846 -n newest-cni-159846: exit status 2 (299.624689ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-159846 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-159846 -n newest-cni-159846
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-159846 -n newest-cni-159846
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.86s)

TestNetworkPlugins/group/kindnet/Start (39.52s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.513030067s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.52s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-779252 "pgrep -a kubelet"
I1028 17:56:23.814079  108914 config.go:182] Loaded profile config "auto-779252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-779252 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nb6v8" [19a971d3-5ca9-43d4-be3d-9a54cb613ac2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nb6v8" [19a971d3-5ca9-43d4-be3d-9a54cb613ac2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004241985s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-779252 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)
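Note: the DNS probe resolves the short name kubernetes.default from inside the netcat pod, so it passes only when the pod's DNS search path and the cluster DNS service both work under the CNI being tested. Command as run (verbatim from the log):

    kubectl --context auto-779252 exec deployment/netcat -- nslookup kubernetes.default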

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
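Note: Localhost and HairPin differ only in the netcat target: "localhost 8080" checks in-pod loopback, while "netcat 8080" resolves the pod's own service name and therefore requires hairpin traffic back to the originating pod. Probes as run (verbatim from the log):

    kubectl --context auto-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"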

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6wnr5" [5de2a03c-05e2-49dc-9a68-6125bb01e9b1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004424108s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6wnr5" [5de2a03c-05e2-49dc-9a68-6125bb01e9b1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004725172s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-997866 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6pkdh" [f0fa0fab-d8e4-494e-8729-856e23d6158b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005185954s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (61s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.999356332s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.00s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-997866 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.07s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-997866 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997866 -n no-preload-997866
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997866 -n no-preload-997866: exit status 2 (314.017782ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-997866 -n no-preload-997866
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-997866 -n no-preload-997866: exit status 2 (308.673207ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-997866 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997866 -n no-preload-997866
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-997866 -n no-preload-997866
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-779252 "pgrep -a kubelet"
I1028 17:56:56.393303  108914 config.go:182] Loaded profile config "kindnet-779252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.52s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-779252 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tv6lt" [7fcf6d35-587e-4942-8619-d462262b8e74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tv6lt" [7fcf6d35-587e-4942-8619-d462262b8e74] Running
E1028 17:57:07.896604  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:57:07.903052  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004265737s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.52s)

TestNetworkPlugins/group/custom-flannel/Start (50.72s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.715108707s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.72s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-779252 exec deployment/netcat -- nslookup kubernetes.default
E1028 17:57:07.915038  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:57:07.936958  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:57:07.978442  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1028 17:57:08.060188  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1028 17:57:08.221687  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (37.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1028 17:57:48.874537  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/old-k8s-version-955756/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (37.297228711s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.30s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-779252 "pgrep -a kubelet"
I1028 17:57:51.082962  108914 config.go:182] Loaded profile config "custom-flannel-779252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-779252 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fdpc5" [98a30743-bdab-4bc7-b4ab-2e8d6fd31719] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fdpc5" [98a30743-bdab-4bc7-b4ab-2e8d6fd31719] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004732889s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ks4s2" [864ac11e-cb03-49a9-8ed5-913ca0d6eac2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00500817s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-779252 "pgrep -a kubelet"
I1028 17:57:59.535383  108914 config.go:182] Loaded profile config "calico-779252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (10.18s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-779252 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g979v" [129d31d6-f3b8-4f7a-9078-17807d19b09f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g979v" [129d31d6-f3b8-4f7a-9078-17807d19b09f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004880285s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-779252 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-779252 "pgrep -a kubelet"
I1028 17:58:06.238583  108914 config.go:182] Loaded profile config "enable-default-cni-779252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-779252 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2nz5g" [f0d7b914-40d0-4fb5-9624-a3792946ee5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2nz5g" [f0d7b914-40d0-4fb5-9624-a3792946ee5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005182213s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

TestNetworkPlugins/group/calico/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-779252 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-779252 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (51.15s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.145624646s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.15s)
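
The Start command for this group, trimmed to the flags that matter for a manual repro (binary path and profile name are specific to this CI run):

	out/minikube-linux-amd64 start -p flannel-779252 --memory=3072 \
		--wait=true --wait-timeout=15m --cni=flannel \
		--driver=docker --container-runtime=crio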

TestNetworkPlugins/group/bridge/Start (42.23s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-779252 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.233482231s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.23s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-v9n65" [aeefa1bf-dba5-40cc-a464-e76826ba6a03] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004649122s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
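
A rough kubectl equivalent of the readiness wait performed here (the test uses its own polling helpers, so this is only an approximation):

	kubectl --context flannel-779252 -n kube-flannel wait pod \
		-l app=flannel --for=condition=Ready --timeout=10m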

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-779252 "pgrep -a kubelet"
I1028 17:59:13.717220  108914 config.go:182] Loaded profile config "bridge-779252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
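
KubeletFlags greps the running kubelet command line over SSH; the same spot check by hand (with the crio runtime, the flags are expected to point the container runtime endpoint at crio's socket):

	out/minikube-linux-amd64 ssh -p bridge-779252 "pgrep -a kubelet"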

TestNetworkPlugins/group/bridge/NetCatPod (11.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-779252 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c5jvl" [c97994a3-9126-4e4d-847e-87baf6c2b774] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c5jvl" [c97994a3-9126-4e4d-847e-87baf6c2b774] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003981104s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.18s)
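
NetCatPod deploys the probe workload that the later DNS/Localhost/HairPin subtests exec into; a manual sketch (testdata/netcat-deployment.yaml ships in the minikube test tree):

	kubectl --context bridge-779252 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context bridge-779252 wait pod -l app=netcat --for=condition=Ready --timeout=15m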

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-779252 "pgrep -a kubelet"
I1028 17:59:17.713923  108914 config.go:182] Loaded profile config "flannel-779252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-779252 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ftdgb" [ae438244-6a33-4e8b-89f4-d3ae7466a426] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ftdgb" [ae438244-6a33-4e8b-89f4-d3ae7466a426] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004456566s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vp5zg" [5cace795-ac7e-42d7-a1a3-81f6e7f4b9b5] Running
E1028 17:59:24.351423  108914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/functional-301254/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004087458s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
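
UserAppExistsAfterStop asserts that a user workload (the dashboard) comes back after the stop/start cycle earlier in this serial group; a quick manual spot check:

	kubectl --context default-k8s-diff-port-552171 -n kubernetes-dashboard \
		get pods -l k8s-app=kubernetes-dashboard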

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-779252 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-779252 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-779252 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vp5zg" [5cace795-ac7e-42d7-a1a3-81f6e7f4b9b5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004943725s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-552171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-552171 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
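
VerifyKubernetesImages lists the images present in the runtime and reports anything outside the expected Kubernetes set; the two "non-minikube" images above are known test workloads, not failures. The listing can be reproduced with the same command the test runs:

	out/minikube-linux-amd64 -p default-k8s-diff-port-552171 image list --format=json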

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-552171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171: exit status 2 (357.138595ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171: exit status 2 (349.668734ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-552171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-552171 -n default-k8s-diff-port-552171
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)
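
The pause/unpause round trip exercised above, with its expected status codes ("minikube status" exits 2 while components report Paused/Stopped, which is why the test logs "may be ok"):

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-552171
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-552171	# Paused, exit 2
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-552171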

Test skip (26/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-803184 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-924453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-924453
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.11s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-779252 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-779252

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-779252

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /etc/hosts:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /etc/resolv.conf:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-779252

>>> host: crictl pods:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: crictl containers:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> k8s: describe netcat deployment:
error: context "kubenet-779252" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-779252" does not exist

>>> k8s: netcat logs:
error: context "kubenet-779252" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-779252" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-779252" does not exist

>>> k8s: coredns logs:
error: context "kubenet-779252" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-779252" does not exist

>>> k8s: api server logs:
error: context "kubenet-779252" does not exist

>>> host: /etc/cni:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: ip a s:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: ip r s:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: iptables-save:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: iptables table nat:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-779252" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-779252" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-779252" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: kubelet daemon config:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> k8s: kubelet logs:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-779252

>>> host: docker daemon status:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: docker daemon config:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: docker system info:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: cri-docker daemon status:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: cri-docker daemon config:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: cri-dockerd version:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: containerd daemon status:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: containerd daemon config:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: containerd config dump:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: crio daemon status:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: crio daemon config:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: /etc/crio:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

>>> host: crio config:
* Profile "kubenet-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779252"

----------------------- debugLogs end: kubenet-779252 [took: 2.967419369s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-779252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-779252
--- SKIP: TestNetworkPlugins/group/kubenet (3.11s)

TestNetworkPlugins/group/cilium (8.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-779252 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-779252" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-779252

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-779252" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-779252" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-779252" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-779252" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-779252" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: kubelet daemon config:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> k8s: kubelet logs:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19872-102136/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 17:47:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-env-910504
contexts:
- context:
    cluster: force-systemd-env-910504
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 17:47:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-env-910504
  name: force-systemd-env-910504
current-context: force-systemd-env-910504
kind: Config
preferences: {}
users:
- name: force-systemd-env-910504
  user:
    client-certificate: /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/force-systemd-env-910504/client.crt
    client-key: /home/jenkins/minikube-integration/19872-102136/.minikube/profiles/force-systemd-env-910504/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-779252

>>> host: docker daemon status:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: docker daemon config:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: docker system info:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: cri-docker daemon status:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: cri-docker daemon config:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: cri-dockerd version:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: containerd daemon status:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: containerd daemon config:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: containerd config dump:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: crio daemon status:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: crio daemon config:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: /etc/crio:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

>>> host: crio config:
* Profile "cilium-779252" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779252"

----------------------- debugLogs end: cilium-779252 [took: 7.650485992s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-779252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-779252
--- SKIP: TestNetworkPlugins/group/cilium (8.04s)