Test Report: Docker_Linux_crio 19876

0db15b506654906b6081fade5258c34c52419f7c:2024-10-28:36841

Failed tests (2/330)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress       | 150.41       |
| 38    | TestAddons/parallel/MetricsServer | 361.92       |
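Both failures can be re-run in isolation before wading through the post-mortem logs. A minimal sketch, assuming a minikube source checkout (the integration tests live under test/integration) and a prebuilt out/minikube-linux-amd64; the CI job also passes driver/runtime settings (docker + crio), typically via the suite's own start-args flag, so results may differ without them:

	# Re-run only the two failed subtests; go test matches one regex
	# per slash-separated subtest level.
	go test ./test/integration -v -timeout 90m \
	    -run 'TestAddons/parallel/(Ingress|MetricsServer)'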
TestAddons/parallel/Ingress (150.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-673472 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-673472 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-673472 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f093ca4a-2985-4bfa-8afe-6aee5e9c8b9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f093ca4a-2985-4bfa-8afe-6aee5e9c8b9c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.004051459s
I1028 11:03:43.018947  541347 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-673472 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.412666258s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-673472 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
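Before the post-mortem dump below, the failure itself decodes simply: "ssh: Process exited with status 28" is most likely the exit code of the command run inside the node, and 28 is curl's CURLE_OPERATION_TIMEDOUT, consistent with the probe hanging for 2m11s. A sketch for reproducing the probe by hand, assuming the addons-673472 profile is still running (the -m 10 timeout and -v verbosity are additions for faster, noisier feedback):

	# Repeat the failing in-node probe with a short timeout:
	out/minikube-linux-amd64 -p addons-673472 ssh \
	    "curl -v -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Inspect the controller and the backend the Ingress should route to:
	kubectl --context addons-673472 -n ingress-nginx get pods,svc
	kubectl --context addons-673472 get ingress,pods,svc -n default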
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-673472
helpers_test.go:235: (dbg) docker inspect addons-673472:

-- stdout --
	[
	    {
	        "Id": "e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735",
	        "Created": "2024-10-28T10:59:43.022756242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T10:59:43.16228893Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:05bcd996665116a573f1bc98d7e2b0a5da287feef26d621bbd294f87ee72c630",
	        "ResolvConfPath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/hosts",
	        "LogPath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735-json.log",
	        "Name": "/addons-673472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-673472:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-673472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18-init/diff:/var/lib/docker/overlay2/d473489c45702c25b9c588a4584ae1c4861c78e651ffd702dd9d50699009da5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-673472",
	                "Source": "/var/lib/docker/volumes/addons-673472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-673472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-673472",
	                "name.minikube.sigs.k8s.io": "addons-673472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90a6408866ae52c01dd153915206bcb1bf2a1623cf9e2dcfbd16c2fec6a503ea",
	            "SandboxKey": "/var/run/docker/netns/90a6408866ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-673472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "60819e8d84bc026fde010beabfc4e4d1bdc6f9809a10a5a0a7b142a5bfb4baef",
	                    "EndpointID": "089bb0d5dfdce7b823abf8a94ca5ede0fd01eb42485fbeb601f7c10fec1b43d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-673472",
	                        "e8b924fc6407"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
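The Ports block in the inspect output above is the quickest host-side check that the node is still reachable. A one-liner sketch for extracting a single mapped port with a Go template (the same template the minikube logs below use to find the SSH port):

	# Host port that 22/tcp inside the node is published on (32768 above):
	docker container inspect addons-673472 \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'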
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-673472 -n addons-673472
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 logs -n 25: (1.242828249s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-337919                                                                     | download-only-337919   | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| start   | --download-only -p                                                                          | download-docker-026471 | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | download-docker-026471                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-026471                                                                   | download-docker-026471 | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-985535   | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | binary-mirror-985535                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40257                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-985535                                                                     | binary-mirror-985535   | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| addons  | enable dashboard -p                                                                         | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | addons-673472                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | addons-673472                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-673472 --wait=true                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 11:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:02 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:03 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | -p addons-673472                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-673472 ssh cat                                                                       | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | /opt/local-path-provisioner/pvc-b5123f9c-13e2-4f3b-9621-6a638e949257_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-673472 ip                                                                            | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-673472 ssh curl -s                                                                   | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-673472 ip                                                                            | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:05 UTC | 28 Oct 24 11:05 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:59:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:59:20.754511  542642 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:59:20.754981  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:20.755000  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 10:59:20.755013  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:20.755499  542642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 10:59:20.756452  542642 out.go:352] Setting JSON to false
	I1028 10:59:20.757469  542642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9705,"bootTime":1730103456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:59:20.757580  542642 start.go:139] virtualization: kvm guest
	I1028 10:59:20.759710  542642 out.go:177] * [addons-673472] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 10:59:20.761657  542642 notify.go:220] Checking for updates...
	I1028 10:59:20.761693  542642 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 10:59:20.763215  542642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:59:20.764504  542642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 10:59:20.765866  542642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	I1028 10:59:20.767378  542642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 10:59:20.768819  542642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 10:59:20.770226  542642 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:59:20.793533  542642 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:59:20.793626  542642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:20.840792  542642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:20.831923049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:20.840909  542642 docker.go:318] overlay module found
	I1028 10:59:20.842859  542642 out.go:177] * Using the docker driver based on user configuration
	I1028 10:59:20.844253  542642 start.go:297] selected driver: docker
	I1028 10:59:20.844275  542642 start.go:901] validating driver "docker" against <nil>
	I1028 10:59:20.844289  542642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 10:59:20.845157  542642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:20.892051  542642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:20.882465964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:20.892234  542642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:59:20.892485  542642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 10:59:20.894462  542642 out.go:177] * Using Docker driver with root privileges
	I1028 10:59:20.895759  542642 cni.go:84] Creating CNI manager for ""
	I1028 10:59:20.895851  542642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:20.895867  542642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 10:59:20.895958  542642 start.go:340] cluster config:
	{Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:20.897432  542642 out.go:177] * Starting "addons-673472" primary control-plane node in "addons-673472" cluster
	I1028 10:59:20.898778  542642 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 10:59:20.900132  542642 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1028 10:59:20.901490  542642 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:20.901546  542642 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 10:59:20.901561  542642 cache.go:56] Caching tarball of preloaded images
	I1028 10:59:20.901597  542642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 10:59:20.901673  542642 preload.go:172] Found /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 10:59:20.901689  542642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 10:59:20.902131  542642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/config.json ...
	I1028 10:59:20.902161  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/config.json: {Name:mk4756f90b022f398c58cfd7f5b361a437b707b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:20.917908  542642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:20.918047  542642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1028 10:59:20.918065  542642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory, skipping pull
	I1028 10:59:20.918070  542642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in cache, skipping pull
	I1028 10:59:20.918078  542642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e as a tarball
	I1028 10:59:20.918085  542642 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e from local cache
	I1028 10:59:33.015225  542642 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e from cached tarball
	I1028 10:59:33.015277  542642 cache.go:194] Successfully downloaded all kic artifacts
	I1028 10:59:33.015322  542642 start.go:360] acquireMachinesLock for addons-673472: {Name:mkc162c1f445a325af5ddcd3a485171b8916426b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:59:33.015518  542642 start.go:364] duration metric: took 127.084µs to acquireMachinesLock for "addons-673472"
	I1028 10:59:33.015572  542642 start.go:93] Provisioning new machine with config: &{Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 10:59:33.015714  542642 start.go:125] createHost starting for "" (driver="docker")
	I1028 10:59:33.018654  542642 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1028 10:59:33.018936  542642 start.go:159] libmachine.API.Create for "addons-673472" (driver="docker")
	I1028 10:59:33.018979  542642 client.go:168] LocalClient.Create starting
	I1028 10:59:33.019097  542642 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem
	I1028 10:59:33.186527  542642 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem
	I1028 10:59:33.313958  542642 cli_runner.go:164] Run: docker network inspect addons-673472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1028 10:59:33.330343  542642 cli_runner.go:211] docker network inspect addons-673472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1028 10:59:33.330422  542642 network_create.go:284] running [docker network inspect addons-673472] to gather additional debugging logs...
	I1028 10:59:33.330443  542642 cli_runner.go:164] Run: docker network inspect addons-673472
	W1028 10:59:33.346911  542642 cli_runner.go:211] docker network inspect addons-673472 returned with exit code 1
	I1028 10:59:33.346949  542642 network_create.go:287] error running [docker network inspect addons-673472]: docker network inspect addons-673472: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-673472 not found
	I1028 10:59:33.346963  542642 network_create.go:289] output of [docker network inspect addons-673472]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-673472 not found
	
	** /stderr **
	I1028 10:59:33.347126  542642 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 10:59:33.364522  542642 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c81d10}
	I1028 10:59:33.364576  542642 network_create.go:124] attempt to create docker network addons-673472 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1028 10:59:33.364624  542642 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-673472 addons-673472
	I1028 10:59:33.431919  542642 network_create.go:108] docker network addons-673472 192.168.49.0/24 created
	I1028 10:59:33.431967  542642 kic.go:121] calculated static IP "192.168.49.2" for the "addons-673472" container
	I1028 10:59:33.432037  542642 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1028 10:59:33.447921  542642 cli_runner.go:164] Run: docker volume create addons-673472 --label name.minikube.sigs.k8s.io=addons-673472 --label created_by.minikube.sigs.k8s.io=true
	I1028 10:59:33.466308  542642 oci.go:103] Successfully created a docker volume addons-673472
	I1028 10:59:33.466394  542642 cli_runner.go:164] Run: docker run --rm --name addons-673472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673472 --entrypoint /usr/bin/test -v addons-673472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib
	I1028 10:59:38.431407  542642 cli_runner.go:217] Completed: docker run --rm --name addons-673472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673472 --entrypoint /usr/bin/test -v addons-673472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib: (4.964952883s)
	I1028 10:59:38.431451  542642 oci.go:107] Successfully prepared a docker volume addons-673472
	I1028 10:59:38.431498  542642 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:38.431554  542642 kic.go:194] Starting extracting preloaded images to volume ...
	I1028 10:59:38.431642  542642 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-673472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir
	I1028 10:59:42.953597  542642 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-673472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir: (4.521892283s)
	I1028 10:59:42.953640  542642 kic.go:203] duration metric: took 4.522081875s to extract preloaded images to volume ...
	W1028 10:59:42.953810  542642 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1028 10:59:42.953932  542642 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1028 10:59:43.007276  542642 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-673472 --name addons-673472 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673472 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-673472 --network addons-673472 --ip 192.168.49.2 --volume addons-673472:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e
	I1028 10:59:43.338724  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Running}}
	I1028 10:59:43.356046  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 10:59:43.375169  542642 cli_runner.go:164] Run: docker exec addons-673472 stat /var/lib/dpkg/alternatives/iptables
	I1028 10:59:43.421376  542642 oci.go:144] the created container "addons-673472" has a running status.
	I1028 10:59:43.421439  542642 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa...
	I1028 10:59:43.483588  542642 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1028 10:59:43.506488  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 10:59:43.524511  542642 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1028 10:59:43.524538  542642 kic_runner.go:114] Args: [docker exec --privileged addons-673472 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1028 10:59:43.569953  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 10:59:43.588196  542642 machine.go:93] provisionDockerMachine start ...
	I1028 10:59:43.588337  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:43.607212  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:43.607468  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:43.607480  542642 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 10:59:43.608322  542642 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42546->127.0.0.1:32768: read: connection reset by peer
	I1028 10:59:46.732613  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-673472
	
	I1028 10:59:46.732653  542642 ubuntu.go:169] provisioning hostname "addons-673472"
	I1028 10:59:46.732722  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:46.749834  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:46.750061  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:46.750078  542642 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-673472 && echo "addons-673472" | sudo tee /etc/hostname
	I1028 10:59:46.880475  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-673472
	
	I1028 10:59:46.880595  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:46.897706  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:46.897934  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:46.897960  542642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-673472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-673472/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-673472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 10:59:47.017226  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 10:59:47.017263  542642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19876-533928/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-533928/.minikube}
	I1028 10:59:47.017304  542642 ubuntu.go:177] setting up certificates
	I1028 10:59:47.017323  542642 provision.go:84] configureAuth start
	I1028 10:59:47.017383  542642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673472
	I1028 10:59:47.034526  542642 provision.go:143] copyHostCerts
	I1028 10:59:47.034628  542642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-533928/.minikube/ca.pem (1078 bytes)
	I1028 10:59:47.034799  542642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-533928/.minikube/cert.pem (1123 bytes)
	I1028 10:59:47.034871  542642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-533928/.minikube/key.pem (1675 bytes)
	I1028 10:59:47.034926  542642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-533928/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca-key.pem org=jenkins.addons-673472 san=[127.0.0.1 192.168.49.2 addons-673472 localhost minikube]
	I1028 10:59:47.320208  542642 provision.go:177] copyRemoteCerts
	I1028 10:59:47.320278  542642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 10:59:47.320318  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.337824  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.425816  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 10:59:47.449530  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 10:59:47.471356  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 10:59:47.492773  542642 provision.go:87] duration metric: took 475.429444ms to configureAuth
	I1028 10:59:47.492806  542642 ubuntu.go:193] setting minikube options for container-runtime
	I1028 10:59:47.492993  542642 config.go:182] Loaded profile config "addons-673472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:59:47.493127  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.509958  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:47.510147  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:47.510164  542642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 10:59:47.723865  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 10:59:47.723905  542642 machine.go:96] duration metric: took 4.135668383s to provisionDockerMachine
	I1028 10:59:47.723923  542642 client.go:171] duration metric: took 14.70493314s to LocalClient.Create
	I1028 10:59:47.723952  542642 start.go:167] duration metric: took 14.705016732s to libmachine.API.Create "addons-673472"
	I1028 10:59:47.723965  542642 start.go:293] postStartSetup for "addons-673472" (driver="docker")
	I1028 10:59:47.723981  542642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 10:59:47.724056  542642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 10:59:47.724109  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.740986  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.830267  542642 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 10:59:47.833684  542642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 10:59:47.833719  542642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 10:59:47.833727  542642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 10:59:47.833733  542642 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 10:59:47.833747  542642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-533928/.minikube/addons for local assets ...
	I1028 10:59:47.833821  542642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-533928/.minikube/files for local assets ...
	I1028 10:59:47.833859  542642 start.go:296] duration metric: took 109.886039ms for postStartSetup
	I1028 10:59:47.834224  542642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673472
	I1028 10:59:47.852268  542642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/config.json ...
	I1028 10:59:47.852581  542642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 10:59:47.852646  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.870677  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.957879  542642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 10:59:47.962634  542642 start.go:128] duration metric: took 14.94690152s to createHost
	I1028 10:59:47.962666  542642 start.go:83] releasing machines lock for "addons-673472", held for 14.947124395s
	I1028 10:59:47.962793  542642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673472
	I1028 10:59:47.980075  542642 ssh_runner.go:195] Run: cat /version.json
	I1028 10:59:47.980150  542642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 10:59:47.980166  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.980227  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.998345  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.998613  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:48.080597  542642 ssh_runner.go:195] Run: systemctl --version
	I1028 10:59:48.151991  542642 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 10:59:48.291683  542642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 10:59:48.296533  542642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 10:59:48.315192  542642 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1028 10:59:48.315278  542642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 10:59:48.343139  542642 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
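Disabling those two configs matters because runtimes pick the lexicographically first file in /etc/cni/net.d; renaming them to *.mk_disabled clears the way for the kindnet config applied later. The renamed files remain on disk:

	ls /etc/cni/net.d/*.mk_disabled
	# 100-crio-bridge.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled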
	I1028 10:59:48.343171  542642 start.go:495] detecting cgroup driver to use...
	I1028 10:59:48.343208  542642 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 10:59:48.343261  542642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 10:59:48.357854  542642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 10:59:48.368073  542642 docker.go:217] disabling cri-docker service (if available) ...
	I1028 10:59:48.368138  542642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 10:59:48.379997  542642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 10:59:48.393039  542642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 10:59:48.469904  542642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 10:59:48.550254  542642 docker.go:233] disabling docker service ...
	I1028 10:59:48.550324  542642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 10:59:48.569270  542642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 10:59:48.580866  542642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 10:59:48.661689  542642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 10:59:48.749585  542642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 10:59:48.760408  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 10:59:48.775824  542642 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 10:59:48.775879  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.785170  542642 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 10:59:48.785244  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.794496  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.803650  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.813248  542642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 10:59:48.822440  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.832021  542642 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.846811  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.856333  542642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 10:59:48.864921  542642 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 10:59:48.864980  542642 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 10:59:48.879815  542642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
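The earlier sysctl failure (status 255) is expected on this kernel: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, which the modprobe above provides so that bridged pod traffic traverses iptables. The standalone sequence:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # now resolves, typically "= 1"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'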
	I1028 10:59:48.888314  542642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:59:48.963756  542642 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 10:59:49.067640  542642 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 10:59:49.067723  542642 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 10:59:49.071463  542642 start.go:563] Will wait 60s for crictl version
	I1028 10:59:49.071526  542642 ssh_runner.go:195] Run: which crictl
	I1028 10:59:49.075058  542642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 10:59:49.108980  542642 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1028 10:59:49.109088  542642 ssh_runner.go:195] Run: crio --version
	I1028 10:59:49.147212  542642 ssh_runner.go:195] Run: crio --version
	I1028 10:59:49.184361  542642 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1028 10:59:49.186086  542642 cli_runner.go:164] Run: docker network inspect addons-673472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 10:59:49.202472  542642 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1028 10:59:49.206394  542642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
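host.minikube.internal is pinned to 192.168.49.1, the gateway of the addons-673472 docker network, so processes inside the node can reach services on the host. From inside the node the entry resolves via /etc/hosts:

	getent hosts host.minikube.internal
	# 192.168.49.1    host.minikube.internal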
	I1028 10:59:49.217166  542642 kubeadm.go:883] updating cluster {Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 10:59:49.217311  542642 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:49.217364  542642 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:59:49.285638  542642 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 10:59:49.285663  542642 crio.go:433] Images already preloaded, skipping extraction
	I1028 10:59:49.285714  542642 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:59:49.320653  542642 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 10:59:49.320679  542642 cache_images.go:84] Images are preloaded, skipping loading
	I1028 10:59:49.320687  542642 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1028 10:59:49.320815  542642 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-673472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 10:59:49.320881  542642 ssh_runner.go:195] Run: crio config
	I1028 10:59:49.366384  542642 cni.go:84] Creating CNI manager for ""
	I1028 10:59:49.366406  542642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:49.366418  542642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 10:59:49.366441  542642 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-673472 NodeName:addons-673472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 10:59:49.366567  542642 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-673472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 10:59:49.366629  542642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 10:59:49.375496  542642 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 10:59:49.375568  542642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 10:59:49.384261  542642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1028 10:59:49.401131  542642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 10:59:49.418088  542642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
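That 2287-byte payload is the kubeadm config rendered above. With the staged binaries it can be sanity-checked before init (a sketch, not a step the test runs; the validate subcommand exists in kubeadm v1.31):

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new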
	I1028 10:59:49.434953  542642 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1028 10:59:49.438558  542642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 10:59:49.449288  542642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:59:49.524974  542642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 10:59:49.538071  542642 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472 for IP: 192.168.49.2
	I1028 10:59:49.538097  542642 certs.go:194] generating shared ca certs ...
	I1028 10:59:49.538115  542642 certs.go:226] acquiring lock for ca certs: {Name:mk4f171b5fc82d02323944775bf27bfd4cb01f5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.538236  542642 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key
	I1028 10:59:49.639824  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt ...
	I1028 10:59:49.639868  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt: {Name:mkd44132e8612cfbcdb9b8d86b1fe1f676ffdeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.640072  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key ...
	I1028 10:59:49.640085  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key: {Name:mkf14aff199e8845f01b8ea4c55bad99ed133239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.640162  542642 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key
	I1028 10:59:49.852851  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.crt ...
	I1028 10:59:49.852887  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.crt: {Name:mkb535181adba9fa3c17366069da7c4c211ab9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.853064  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key ...
	I1028 10:59:49.853076  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key: {Name:mkbdd5c2c2158f7023fd6059f943bbe4bae61b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.853951  542642 certs.go:256] generating profile certs ...
	I1028 10:59:49.854046  542642 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.key
	I1028 10:59:49.854064  542642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt with IP's: []
	I1028 10:59:49.966208  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt ...
	I1028 10:59:49.966247  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: {Name:mk1a91296c0a0584dfd795afde0cd6124b219b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.966457  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.key ...
	I1028 10:59:49.966473  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.key: {Name:mke76e2304da30762701329588da4e12fcf058eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.966569  542642 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56
	I1028 10:59:49.966595  542642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1028 10:59:50.066384  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56 ...
	I1028 10:59:50.066419  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56: {Name:mk26f4eae40046e2f9760be0736db8a4cf2aed4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:50.066619  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56 ...
	I1028 10:59:50.066643  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56: {Name:mke6dba6ec7cc173445d41f08f411d73cd4c6923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:50.066750  542642 certs.go:381] copying /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56 -> /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt
	I1028 10:59:50.066847  542642 certs.go:385] copying /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56 -> /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key
	I1028 10:59:50.066911  542642 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key
	I1028 10:59:50.066937  542642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt with IP's: []
	I1028 10:59:50.225084  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt ...
	I1028 10:59:50.225117  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt: {Name:mk8517bcb80f98969c02fde259782834ba3d7d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:50.225299  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key ...
	I1028 10:59:50.225316  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key: {Name:mk1574fd3c7aa35b7c3a8015ad57972a01c86130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:50.225597  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 10:59:50.225651  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem (1078 bytes)
	I1028 10:59:50.225688  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem (1123 bytes)
	I1028 10:59:50.225724  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/key.pem (1675 bytes)
	I1028 10:59:50.226429  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 10:59:50.250065  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 10:59:50.272840  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 10:59:50.297142  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 10:59:50.319157  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 10:59:50.342951  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 10:59:50.367065  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 10:59:50.389802  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 10:59:50.412522  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 10:59:50.435751  542642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 10:59:50.452937  542642 ssh_runner.go:195] Run: openssl version
	I1028 10:59:50.458336  542642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 10:59:50.467859  542642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:59:50.471462  542642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:59 /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:59:50.471530  542642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:59:50.478466  542642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
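The symlink name b5213941.0 is not arbitrary: OpenSSL locates trust anchors in /etc/ssl/certs by subject-hash filenames, and b5213941 is what the `openssl x509 -hash` call above prints for minikubeCA (the .0 suffix disambiguates hash collisions):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941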
	I1028 10:59:50.487526  542642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 10:59:50.490685  542642 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 10:59:50.490735  542642 kubeadm.go:392] StartCluster: {Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:50.490823  542642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 10:59:50.490883  542642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 10:59:50.525092  542642 cri.go:89] found id: ""
	I1028 10:59:50.525168  542642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 10:59:50.533742  542642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 10:59:50.542479  542642 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1028 10:59:50.542541  542642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 10:59:50.551003  542642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 10:59:50.551027  542642 kubeadm.go:157] found existing configuration files:
	
	I1028 10:59:50.551080  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 10:59:50.558870  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 10:59:50.558941  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 10:59:50.566564  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 10:59:50.574624  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 10:59:50.574686  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 10:59:50.582723  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 10:59:50.591122  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 10:59:50.591178  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 10:59:50.599560  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 10:59:50.608182  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 10:59:50.608236  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 10:59:50.616624  542642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1028 10:59:50.671384  542642 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-gcp\n", err: exit status 1
	I1028 10:59:50.723737  542642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
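Both warnings are benign here: the first is the ignored SystemVerification check (no `configs` kernel module on this GCP host kernel), and the second only notes that kubelet.service is not enabled, which is fine since minikube starts kubelet itself. The preflight phase can be replayed in isolation with the same flags to inspect them (a sketch):

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Swap,SystemVerification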
	I1028 10:59:59.460393  542642 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 10:59:59.460471  542642 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 10:59:59.460584  542642 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1028 10:59:59.460664  542642 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-gcp
	I1028 10:59:59.460700  542642 kubeadm.go:310] OS: Linux
	I1028 10:59:59.460753  542642 kubeadm.go:310] CGROUPS_CPU: enabled
	I1028 10:59:59.460837  542642 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1028 10:59:59.460886  542642 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1028 10:59:59.460922  542642 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1028 10:59:59.460965  542642 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1028 10:59:59.461030  542642 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1028 10:59:59.461095  542642 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1028 10:59:59.461153  542642 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1028 10:59:59.461194  542642 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1028 10:59:59.461267  542642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 10:59:59.461412  542642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 10:59:59.461527  542642 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 10:59:59.461595  542642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 10:59:59.464242  542642 out.go:235]   - Generating certificates and keys ...
	I1028 10:59:59.464338  542642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 10:59:59.464395  542642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 10:59:59.464494  542642 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 10:59:59.464578  542642 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 10:59:59.464659  542642 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 10:59:59.464773  542642 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 10:59:59.464870  542642 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 10:59:59.464989  542642 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-673472 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 10:59:59.465070  542642 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 10:59:59.465204  542642 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-673472 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 10:59:59.465311  542642 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 10:59:59.465425  542642 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 10:59:59.465496  542642 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 10:59:59.465556  542642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 10:59:59.465619  542642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 10:59:59.465683  542642 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 10:59:59.465762  542642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 10:59:59.465855  542642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 10:59:59.465939  542642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 10:59:59.466049  542642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 10:59:59.466148  542642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 10:59:59.467513  542642 out.go:235]   - Booting up control plane ...
	I1028 10:59:59.467597  542642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 10:59:59.467678  542642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 10:59:59.467747  542642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 10:59:59.467853  542642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 10:59:59.467939  542642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 10:59:59.467976  542642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 10:59:59.468090  542642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 10:59:59.468181  542642 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 10:59:59.468233  542642 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.60969ms
	I1028 10:59:59.468327  542642 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 10:59:59.468422  542642 kubeadm.go:310] [api-check] The API server is healthy after 4.502341846s
	I1028 10:59:59.468553  542642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 10:59:59.468704  542642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 10:59:59.468785  542642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 10:59:59.468971  542642 kubeadm.go:310] [mark-control-plane] Marking the node addons-673472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 10:59:59.469113  542642 kubeadm.go:310] [bootstrap-token] Using token: s6hekf.p6us0uvpwrt54ii9
	I1028 10:59:59.470775  542642 out.go:235]   - Configuring RBAC rules ...
	I1028 10:59:59.470918  542642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 10:59:59.471027  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 10:59:59.471211  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 10:59:59.471402  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 10:59:59.471522  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 10:59:59.471626  542642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 10:59:59.471854  542642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 10:59:59.471911  542642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 10:59:59.471957  542642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 10:59:59.471964  542642 kubeadm.go:310] 
	I1028 10:59:59.472023  542642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 10:59:59.472037  542642 kubeadm.go:310] 
	I1028 10:59:59.472239  542642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 10:59:59.472264  542642 kubeadm.go:310] 
	I1028 10:59:59.472380  542642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 10:59:59.472508  542642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 10:59:59.472597  542642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 10:59:59.472609  542642 kubeadm.go:310] 
	I1028 10:59:59.472702  542642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 10:59:59.472721  542642 kubeadm.go:310] 
	I1028 10:59:59.472812  542642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 10:59:59.472822  542642 kubeadm.go:310] 
	I1028 10:59:59.472901  542642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 10:59:59.473004  542642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 10:59:59.473103  542642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 10:59:59.473116  542642 kubeadm.go:310] 
	I1028 10:59:59.473248  542642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 10:59:59.473323  542642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 10:59:59.473329  542642 kubeadm.go:310] 
	I1028 10:59:59.473392  542642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s6hekf.p6us0uvpwrt54ii9 \
	I1028 10:59:59.473472  542642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:22f0f7d5663ef838083a14a9e686edb004104fc5a60ae6df0f45c5a76351185e \
	I1028 10:59:59.473491  542642 kubeadm.go:310] 	--control-plane 
	I1028 10:59:59.473497  542642 kubeadm.go:310] 
	I1028 10:59:59.473572  542642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 10:59:59.473586  542642 kubeadm.go:310] 
	I1028 10:59:59.473686  542642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s6hekf.p6us0uvpwrt54ii9 \
	I1028 10:59:59.473848  542642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:22f0f7d5663ef838083a14a9e686edb004104fc5a60ae6df0f45c5a76351185e 
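The bootstrap token embedded in those join commands expires after the 24h ttl set in the InitConfiguration above; past that point an equivalent command can be regenerated on the control plane:

	kubeadm token create --print-join-command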
	I1028 10:59:59.473867  542642 cni.go:84] Creating CNI manager for ""
	I1028 10:59:59.473879  542642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:59.475888  542642 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 10:59:59.477334  542642 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 10:59:59.482009  542642 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 10:59:59.482040  542642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 10:59:59.500964  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
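cni.yaml here is the kindnet manifest selected at cni.go:143 for the docker driver + crio runtime combination. Its rollout can be followed once applied (assuming the DaemonSet is named kindnet, as minikube's bundled manifest names it):

	kubectl -n kube-system rollout status daemonset/kindnet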
	I1028 10:59:59.709902  542642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 10:59:59.709975  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:59:59.709977  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-673472 minikube.k8s.io/updated_at=2024_10_28T10_59_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=addons-673472 minikube.k8s.io/primary=true
	I1028 10:59:59.827958  542642 ops.go:34] apiserver oom_adj: -16
	I1028 10:59:59.827991  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:00.328362  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:00.828250  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:01.328889  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:01.828850  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:02.328156  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:02.828817  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:03.328466  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:03.828414  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:03.927985  542642 kubeadm.go:1113] duration metric: took 4.218075435s to wait for elevateKubeSystemPrivileges
	I1028 11:00:03.928025  542642 kubeadm.go:394] duration metric: took 13.43729589s to StartCluster
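
The burst of identical `kubectl get sa default` runs between 10:59:59.8 and 11:00:03.8 is a readiness poll at roughly 500ms intervals: the cluster-admin binding for kube-system can only take effect once the default service account exists, so the check repeats until it succeeds — the 4.218s elevateKubeSystemPrivileges duration above is exactly that wait. A sketch of the pattern, with the binary and kubeconfig paths taken from the log (the helper name is illustrative):

	package sketch

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA re-runs `kubectl get sa default` every 500ms until it
	// exits zero, mirroring the polling loop in the log above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig).Run()
			if err == nil {
				return nil // service account exists; the RBAC grant can proceed
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready within %s", timeout)
	}

	// e.g. waitForDefaultSA("/var/lib/minikube/binaries/v1.31.2/kubectl",
	//                       "/var/lib/minikube/kubeconfig", 2*time.Minute)
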
	I1028 11:00:03.928051  542642 settings.go:142] acquiring lock: {Name:mk4b7cc0753ef8271ffd0ab99530eca53ed30f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:00:03.928255  542642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 11:00:03.928771  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/kubeconfig: {Name:mk7ef4f3d61e5f33766771edfad48c83b564ef6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:00:03.928998  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:00:03.929046  542642 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:00:03.929156  542642 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1028 11:00:03.929291  542642 addons.go:69] Setting yakd=true in profile "addons-673472"
	I1028 11:00:03.929311  542642 config.go:182] Loaded profile config "addons-673472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:00:03.929304  542642 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-673472"
	I1028 11:00:03.929322  542642 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-673472"
	I1028 11:00:03.929335  542642 addons.go:69] Setting volcano=true in profile "addons-673472"
	I1028 11:00:03.929346  542642 addons.go:234] Setting addon volcano=true in "addons-673472"
	I1028 11:00:03.929332  542642 addons.go:69] Setting storage-provisioner=true in profile "addons-673472"
	I1028 11:00:03.929361  542642 addons.go:69] Setting ingress-dns=true in profile "addons-673472"
	I1028 11:00:03.929371  542642 addons.go:234] Setting addon storage-provisioner=true in "addons-673472"
	I1028 11:00:03.929373  542642 addons.go:69] Setting gcp-auth=true in profile "addons-673472"
	I1028 11:00:03.929364  542642 addons.go:69] Setting default-storageclass=true in profile "addons-673472"
	I1028 11:00:03.929384  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929386  542642 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-673472"
	I1028 11:00:03.929393  542642 mustload.go:65] Loading cluster: addons-673472
	I1028 11:00:03.929375  542642 addons.go:234] Setting addon ingress-dns=true in "addons-673472"
	I1028 11:00:03.929399  542642 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-673472"
	I1028 11:00:03.929411  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929425  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929417  542642 addons.go:69] Setting inspektor-gadget=true in profile "addons-673472"
	I1028 11:00:03.929431  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929440  542642 addons.go:234] Setting addon inspektor-gadget=true in "addons-673472"
	I1028 11:00:03.929468  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929526  542642 config.go:182] Loaded profile config "addons-673472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:00:03.929764  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929839  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929908  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929934  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929935  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929948  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.930127  542642 addons.go:69] Setting cloud-spanner=true in profile "addons-673472"
	I1028 11:00:03.930152  542642 addons.go:234] Setting addon cloud-spanner=true in "addons-673472"
	I1028 11:00:03.930186  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.930184  542642 addons.go:69] Setting metrics-server=true in profile "addons-673472"
	I1028 11:00:03.930256  542642 addons.go:234] Setting addon metrics-server=true in "addons-673472"
	I1028 11:00:03.930332  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.930641  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.930941  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929350  542642 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-673472"
	I1028 11:00:03.931298  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929392  542642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-673472"
	I1028 11:00:03.932095  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.932408  542642 addons.go:69] Setting volumesnapshots=true in profile "addons-673472"
	I1028 11:00:03.932473  542642 addons.go:234] Setting addon volumesnapshots=true in "addons-673472"
	I1028 11:00:03.929316  542642 addons.go:234] Setting addon yakd=true in "addons-673472"
	I1028 11:00:03.932523  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.932584  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.933046  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.933213  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.933240  542642 out.go:177] * Verifying Kubernetes components...
	I1028 11:00:03.929324  542642 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-673472"
	I1028 11:00:03.933348  542642 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-673472"
	I1028 11:00:03.933599  542642 addons.go:69] Setting registry=true in profile "addons-673472"
	I1028 11:00:03.933629  542642 addons.go:234] Setting addon registry=true in "addons-673472"
	I1028 11:00:03.933665  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929354  542642 addons.go:69] Setting ingress=true in profile "addons-673472"
	I1028 11:00:03.935533  542642 addons.go:234] Setting addon ingress=true in "addons-673472"
	I1028 11:00:03.935595  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929374  542642 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-673472"
	I1028 11:00:03.935939  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.932486  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.942401  542642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:00:03.953361  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.953371  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.953453  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.953459  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.962021  542642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:00:03.965883  542642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:00:03.965914  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:00:03.965990  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
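
The cli_runner lines use `docker container inspect` with Go templates for two different lookups: `{{.State.Status}}` answers "is the node container still running?", while the nested index expression digs the published host port for the guest's 22/tcp out of .NetworkSettings.Ports — which is how every later `new ssh client: &{IP:127.0.0.1 Port:32768 ...}` line gets its port. A small sketch of the same queries (helper name illustrative):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// inspect shells out to `docker container inspect -f <template> <name>`
	// and returns the trimmed result, as cli_runner does above.
	func inspect(container, tmpl string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	// state, _ := inspect("addons-673472", `{{.State.Status}}`) // "running"
	// port, _ := inspect("addons-673472",
	// 	`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`) // "32768" here
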
	I1028 11:00:03.972247  542642 host.go:66] Checking if "addons-673472" exists ...
	W1028 11:00:03.974727  542642 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 11:00:03.977323  542642 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 11:00:03.978481  542642 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 11:00:03.978551  542642 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 11:00:03.978889  542642 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 11:00:03.978914  542642 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 11:00:03.978976  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.983666  542642 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:00:03.983960  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 11:00:03.984046  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.983728  542642 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:00:03.984832  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 11:00:03.984896  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.986121  542642 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 11:00:03.989150  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 11:00:03.989174  542642 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 11:00:03.989236  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.996927  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 11:00:03.998673  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 11:00:03.998743  542642 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 11:00:03.998817  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.025876  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.026038  542642 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 11:00:04.031799  542642 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 11:00:04.031955  542642 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 11:00:04.032005  542642 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 11:00:04.033861  542642 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 11:00:04.033890  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 11:00:04.033954  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.034344  542642 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:00:04.034361  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 11:00:04.034411  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.034764  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 11:00:04.034779  542642 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 11:00:04.034838  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.044968  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.045750  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.052954  542642 addons.go:234] Setting addon default-storageclass=true in "addons-673472"
	I1028 11:00:04.053016  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:04.053456  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:04.056887  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.062397  542642 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-673472"
	I1028 11:00:04.062452  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:04.062942  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:04.064878  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.067610  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 11:00:04.068247  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.068985  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 11:00:04.070493  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:00:04.070580  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 11:00:04.073569  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:00:04.073655  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 11:00:04.076838  542642 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:00:04.076878  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 11:00:04.076993  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.077626  542642 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 11:00:04.079171  542642 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 11:00:04.079188  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 11:00:04.079240  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.079406  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 11:00:04.080887  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 11:00:04.082292  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 11:00:04.082838  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.084942  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 11:00:04.086241  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 11:00:04.087505  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 11:00:04.087526  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 11:00:04.087604  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.093486  542642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:00:04.093511  542642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:00:04.093574  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.095476  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.096793  542642 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 11:00:04.098375  542642 out.go:177]   - Using image docker.io/busybox:stable
	I1028 11:00:04.099672  542642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:00:04.099696  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 11:00:04.099761  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.100944  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.114810  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.118312  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.120239  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.139369  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.144237  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	W1028 11:00:04.207955  542642 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1028 11:00:04.207999  542642 retry.go:31] will retry after 271.063266ms: ssh: handshake failed: EOF
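
The handshake EOF above is a transient failure — sshd inside the freshly started container is not yet accepting connections — so retry.go schedules another attempt after a short randomized delay (the 271ms figure is one sampled delay, not a constant). The general shape of such a jittered retry, as a sketch:

	package sketch

	import (
		"math/rand"
		"time"
	)

	// retryTransient retries op with a jittered delay between attempts, in
	// the spirit of the retry.go line above.
	func retryTransient(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			time.Sleep(base + time.Duration(rand.Int63n(int64(base)))) // base plus jitter
		}
		return err
	}
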
	I1028 11:00:04.227147  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
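
The bash pipeline above edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block in front of the forward plugin and a log directive in front of errors, then feeds the result back through kubectl replace (the "host record injected" line at 11:00:06 confirms it took effect). Reconstructed from those sed expressions, the rewritten Corefile carries a fragment like:

	.:53 {
	    log
	    errors
	    # ... stock plugins unchanged ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ...
	}

The hosts plugin answers host.minikube.internal locally and falls through to the normal forwarders for every other name.
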
	I1028 11:00:04.321827  542642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:00:04.333640  542642 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:00:04.333672  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 11:00:04.508853  542642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 11:00:04.508937  542642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 11:00:04.512691  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:00:04.519934  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:00:04.520569  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 11:00:04.520594  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 11:00:04.605692  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 11:00:04.618926  542642 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 11:00:04.618960  542642 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 11:00:04.627789  542642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 11:00:04.627820  542642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 11:00:04.706319  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 11:00:04.706415  542642 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 11:00:04.710259  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:00:04.718156  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:00:04.726716  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:00:04.806618  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 11:00:04.806721  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 11:00:04.810411  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:00:04.812556  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:00:04.820399  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 11:00:04.820428  542642 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 11:00:04.907872  542642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 11:00:04.907905  542642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 11:00:05.017060  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 11:00:05.017147  542642 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 11:00:05.022649  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 11:00:05.022734  542642 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 11:00:05.119250  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 11:00:05.119334  542642 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 11:00:05.126092  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:00:05.207687  542642 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:00:05.207733  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 11:00:05.410039  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 11:00:05.410086  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 11:00:05.425916  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:00:05.425953  542642 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 11:00:05.521360  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:00:05.521390  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 11:00:05.610953  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:00:05.828153  542642 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:00:05.828181  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 11:00:06.013272  542642 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.786075969s)
	I1028 11:00:06.013525  542642 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1028 11:00:06.013494  542642 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.691512759s)
	I1028 11:00:06.014835  542642 node_ready.go:35] waiting up to 6m0s for node "addons-673472" to be "Ready" ...
	I1028 11:00:06.025963  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 11:00:06.026008  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 11:00:06.113675  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:00:06.128466  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:00:06.410201  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:00:06.715954  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 11:00:06.715991  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 11:00:06.811583  542642 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-673472" context rescaled to 1 replicas
	I1028 11:00:07.226692  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 11:00:07.226725  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 11:00:07.706702  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 11:00:07.706749  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 11:00:07.919529  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 11:00:07.919659  542642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 11:00:08.016143  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.503416177s)
	I1028 11:00:08.024375  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 11:00:08.024461  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 11:00:08.029041  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:08.218644  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 11:00:08.218672  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 11:00:08.330261  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:00:08.330292  542642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 11:00:08.606760  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:00:08.906361  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.386307705s)
	I1028 11:00:08.906662  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.196323131s)
	I1028 11:00:08.906727  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.300876329s)
	I1028 11:00:10.023181  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.304981234s)
	I1028 11:00:10.023228  542642 addons.go:475] Verifying addon ingress=true in "addons-673472"
	I1028 11:00:10.023230  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.296476835s)
	I1028 11:00:10.023354  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.210771619s)
	I1028 11:00:10.023323  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.212822964s)
	I1028 11:00:10.023615  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.897444273s)
	I1028 11:00:10.023776  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.412725651s)
	I1028 11:00:10.023797  542642 addons.go:475] Verifying addon registry=true in "addons-673472"
	I1028 11:00:10.024106  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.910390692s)
	I1028 11:00:10.024166  542642 addons.go:475] Verifying addon metrics-server=true in "addons-673472"
	I1028 11:00:10.024204  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.895689846s)
	I1028 11:00:10.025078  542642 out.go:177] * Verifying registry addon...
	I1028 11:00:10.025112  542642 out.go:177] * Verifying ingress addon...
	I1028 11:00:10.026171  542642 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-673472 service yakd-dashboard -n yakd-dashboard
	
	I1028 11:00:10.028979  542642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 11:00:10.029010  542642 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 11:00:10.037917  542642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 11:00:10.037939  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:10.038361  542642 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 11:00:10.038383  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1028 11:00:10.040104  542642 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
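
The default-storageclass failure above is Kubernetes' optimistic concurrency at work: between reading the local-path StorageClass and writing it back, another actor (the storage-provisioner-rancher addon being enabled in parallel) modified the object, so the update carried a stale resourceVersion and was rejected. The standard remedy is to re-read and retry on conflict; a sketch with client-go's retry helper (function name illustrative, not minikube's code):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, re-fetching the
	// object on every attempt so each Update carries a fresh resourceVersion.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict error triggers another Get+Update round
		})
	}
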
	I1028 11:00:10.525217  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:10.611986  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:10.613144  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:10.748755  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.338472038s)
	W1028 11:00:10.748844  542642 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 11:00:10.748875  542642 retry.go:31] will retry after 133.009364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 11:00:10.882487  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
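
The failed apply in the block above is the classic CRD ordering race: the VolumeSnapshotClass object was submitted in the same batch as the CRD that defines its kind, and the API server had not yet established the new type — hence "no matches for kind ... ensure CRDs are installed first". minikube's answer is simply to retry, this time with --force; an alternative pattern is to apply the CRD manifests first and block on their Established condition before applying the custom resources, sketched here around kubectl (helper name illustrative):

	package sketch

	import "os/exec"

	// applyWithCRDWait applies CRD manifests, waits for each to be
	// established, then applies the dependent custom resources, avoiding
	// the race logged above.
	func applyWithCRDWait(kubectl string, crds, resources []string) error {
		for _, f := range crds {
			if err := exec.Command(kubectl, "apply", "-f", f).Run(); err != nil {
				return err
			}
		}
		for _, f := range crds {
			// Blocks until the API server serves the new kind.
			if err := exec.Command(kubectl, "wait", "--for", "condition=established",
				"--timeout=60s", "-f", f).Run(); err != nil {
				return err
			}
		}
		for _, f := range resources {
			if err := exec.Command(kubectl, "apply", "-f", f).Run(); err != nil {
				return err
			}
		}
		return nil
	}
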
	I1028 11:00:11.032534  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:11.033232  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:11.210423  542642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 11:00:11.210510  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:11.240863  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:11.332257  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.725366649s)
	I1028 11:00:11.332305  542642 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-673472"
	I1028 11:00:11.333950  542642 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 11:00:11.336375  542642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 11:00:11.342096  542642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 11:00:11.342124  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
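
Each "Verifying ... addon" step above follows the same kapi.go recipe: list pods by label selector in the addon's namespace, then re-check their phase until they leave Pending — which is why the log now settles into pages of "waiting for pod ... current state: Pending" lines until the node itself goes Ready. A condensed client-go sketch of that loop (not the kapi.go source itself):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPods polls pods matching selector in ns until at least
	// one exists and all of them report phase Running.
	func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface,
		ns, selector string, interval time.Duration) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx,
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false // still Pending, as in the lines above
				}
			}
			if running {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(interval):
			}
		}
	}
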
	I1028 11:00:11.423475  542642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 11:00:11.440624  542642 addons.go:234] Setting addon gcp-auth=true in "addons-673472"
	I1028 11:00:11.440716  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:11.441129  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:11.457258  542642 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 11:00:11.457325  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:11.474228  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:11.533017  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:11.533428  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:11.839750  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:12.031897  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:12.032437  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:12.340276  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:12.533142  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:12.533589  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:12.840428  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:13.018029  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:13.033083  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:13.033461  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:13.340072  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:13.532568  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:13.533225  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:13.709023  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.826484109s)
	I1028 11:00:13.709118  542642 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.251824215s)
	I1028 11:00:13.711424  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:00:13.712924  542642 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 11:00:13.714330  542642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 11:00:13.714371  542642 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 11:00:13.733018  542642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 11:00:13.733053  542642 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 11:00:13.752013  542642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:00:13.752036  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 11:00:13.769564  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:00:13.839914  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:14.032298  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:14.032517  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:14.127065  542642 addons.go:475] Verifying addon gcp-auth=true in "addons-673472"
	I1028 11:00:14.128722  542642 out.go:177] * Verifying gcp-auth addon...
	I1028 11:00:14.130951  542642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 11:00:14.133760  542642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 11:00:14.133782  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:14.340285  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:14.532982  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:14.533288  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:14.634613  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:14.840143  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:15.019148  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:15.032448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:15.032958  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:15.134336  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:15.340879  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:15.533080  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:15.533303  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:15.634764  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:15.839750  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:16.031926  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:16.032413  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:16.135004  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:16.340373  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:16.532536  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:16.532982  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:16.634735  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:16.839939  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:17.032448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:17.032749  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:17.134502  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:17.340641  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:17.518436  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:17.532411  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:17.532793  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:17.634354  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:17.840938  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:18.032048  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:18.032293  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:18.134554  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:18.339833  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:18.532357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:18.532762  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:18.634210  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:18.840690  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:19.032462  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:19.032641  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:19.134168  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:19.340685  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:19.532255  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:19.532666  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:19.634194  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:19.841622  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:20.018987  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:20.032348  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:20.032674  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:20.134019  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:20.340070  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:20.532422  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:20.532994  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:20.634357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:20.840227  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:21.032618  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:21.033102  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:21.134873  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:21.339907  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:21.532108  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:21.532579  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:21.635127  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:21.840405  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:22.032819  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:22.033326  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:22.135122  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:22.339809  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:22.518643  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:22.532568  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:22.533242  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:22.634588  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:22.841585  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:23.032631  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:23.033127  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:23.134959  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:23.339960  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:23.532979  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:23.533340  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:23.635104  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:23.840555  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:24.031958  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:24.032313  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:24.135208  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:24.341021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:24.518779  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:24.532954  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:24.533358  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:24.634814  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:24.840281  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:25.033068  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:25.033459  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:25.137076  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:25.340495  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:25.532613  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:25.532961  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:25.634334  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:25.840534  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:26.033051  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:26.033303  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:26.134656  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:26.340021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:26.519479  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:26.532948  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:26.533361  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:26.634677  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:26.840102  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:27.032835  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:27.033220  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:27.134588  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:27.340305  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:27.533159  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:27.533609  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:27.634829  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:27.840239  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:28.032417  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:28.033104  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:28.134251  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:28.340451  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:28.533263  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:28.533794  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:28.634529  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:28.840852  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:29.018852  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:29.032228  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:29.032824  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:29.134287  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:29.340648  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:29.533084  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:29.533408  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:29.635087  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:29.840447  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:30.032021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:30.032563  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:30.135111  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:30.340261  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:30.532981  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:30.533379  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:30.634986  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:30.841062  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:31.020622  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:31.032001  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:31.032707  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:31.135107  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:31.340063  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:31.532892  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:31.533331  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:31.634666  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:31.839664  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:32.032046  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:32.032349  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:32.134668  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:32.339703  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:32.532501  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:32.533004  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:32.634565  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:32.839946  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:33.032104  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:33.032533  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:33.133887  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:33.339750  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:33.521653  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:33.532019  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:33.532404  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:33.634455  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:33.840362  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:34.031751  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:34.032123  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:34.134704  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:34.339920  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:34.532578  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:34.532934  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:34.634561  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:34.839727  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:35.032216  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:35.032922  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:35.134202  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:35.340390  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:35.533051  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:35.533473  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:35.634951  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:35.840086  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:36.018743  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:36.032357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:36.032784  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:36.134355  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:36.340342  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:36.532992  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:36.533376  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:36.634655  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:36.840006  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:37.032478  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:37.033022  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:37.134411  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:37.340588  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:37.532455  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:37.532816  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:37.634265  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:37.840402  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:38.032036  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:38.032463  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:38.134650  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:38.339611  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:38.518268  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:38.532379  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:38.532934  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:38.634357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:38.840381  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:39.032763  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:39.033084  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:39.134336  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:39.340533  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:39.532865  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:39.533246  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:39.634678  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:39.839778  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:40.032044  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:40.032580  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:40.134883  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:40.339856  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:40.518324  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:40.532250  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:40.532693  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:40.634543  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:40.839924  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:41.031724  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:41.032109  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:41.134925  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:41.340234  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:41.532874  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:41.533484  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:41.634923  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:41.839972  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:42.032340  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:42.033008  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:42.134638  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:42.339653  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:42.518410  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:42.532308  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:42.532744  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:42.634174  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:42.840369  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:43.032777  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:43.033157  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:43.134517  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:43.340512  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:43.533090  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:43.533479  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:43.634890  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:43.840070  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:44.032473  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:44.033044  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:44.133729  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:44.339731  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:44.518656  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:44.532616  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:44.533293  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:44.634675  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:44.839609  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:45.031690  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:45.032408  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:45.134577  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:45.339862  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:45.533001  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:45.533328  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:45.634820  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:45.840147  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:46.032680  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:46.033230  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:46.134542  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:46.339620  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:46.532313  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:46.533065  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:46.634276  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:46.840571  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:47.018057  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:47.032700  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:47.033201  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:47.134712  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:47.339996  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:47.533566  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:47.535158  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:47.634705  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:47.840323  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:48.032891  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:48.033554  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:48.135125  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:48.341082  542642 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 11:00:48.341103  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:48.519270  542642 node_ready.go:49] node "addons-673472" has status "Ready":"True"
	I1028 11:00:48.519364  542642 node_ready.go:38] duration metric: took 42.504448997s for node "addons-673472" to be "Ready" ...
	I1028 11:00:48.519383  542642 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
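(The node_ready lines above are the turning point of this wait: after 42.5s the node reports Ready, and the waiter moves on to the listed system-critical pods. Sketched below is an equivalent readiness poll using client-go's wait helper; the node name and roughly 2s cadence come from this log, while the rest is an illustrative assumption rather than minikube's node_ready.go.)

// node_ready_sketch.go — poll a Node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 6 minutes, checking the Node's Ready condition.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "addons-673472", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
	if err != nil {
		panic(err)
	}
}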
	I1028 11:00:48.532873  542642 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace to be "Ready" ...
	I1028 11:00:48.538600  542642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 11:00:48.538625  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:48.540010  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:48.634740  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:48.840061  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:49.037329  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:49.037959  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:49.137704  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:49.341158  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:49.533448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:49.533769  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:49.635369  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:49.842214  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:50.033503  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:50.033994  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:50.212249  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:50.409722  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:50.534110  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:50.535121  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:50.611537  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
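(Where the kapi lines above report raw pod phase, the pod_ready lines key off the Pod's Ready condition, which is why a pod can already be Running yet still log "Ready":"False" while its readiness probe fails. A standalone helper expressing that condition check, for illustration only:)

// podready_sketch.go — the condition the pod_ready lines above test.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the Pod's "Ready" condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(podReady(p)) // prints false, matching "Ready":"False" above
}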
	I1028 11:00:50.635531  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:50.842955  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:51.034283  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:51.035568  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:51.135341  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:51.342518  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:51.532715  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:51.533127  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:51.634642  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:51.841910  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:52.033703  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:52.034134  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:52.135254  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:52.341350  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:52.533074  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:52.533473  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:52.634985  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:52.841191  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:53.032432  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:53.032929  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:53.038967  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:00:53.134977  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:53.342086  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:53.534117  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:53.534303  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:53.634596  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:53.842141  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:54.033150  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:54.033798  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:54.134916  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:54.340966  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:54.533868  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:54.534054  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:54.634837  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:54.841351  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:55.033162  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:55.033629  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:55.135251  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:55.343069  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:55.533683  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:55.534011  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:55.538521  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:00:55.634614  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:55.841853  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:56.033113  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:56.033323  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:56.134286  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:56.342244  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:56.534343  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:56.534563  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:56.635087  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:56.841851  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:57.033870  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:57.034195  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:57.135142  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:57.343291  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:57.533146  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:57.533355  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:57.539523  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:00:57.635013  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:57.843413  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:58.032675  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:58.033181  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:58.135325  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:58.341465  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:58.533138  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:58.533305  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:58.634383  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:58.841730  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:59.032925  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:59.033298  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:59.135767  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:59.340854  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:59.534406  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:59.534730  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:59.634079  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:59.841665  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:00.033109  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:00.033265  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:00.038431  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:00.135379  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:00.341771  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:00.533825  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:00.534007  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:00.635490  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:00.841832  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:01.033296  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:01.033699  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:01.134828  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:01.340920  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:01.535910  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:01.536068  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:01.634991  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:01.840999  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:02.033058  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:02.033403  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:02.134186  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:02.341390  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:02.533200  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:02.533519  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:02.538205  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:02.634199  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:02.841616  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:03.032687  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:03.033026  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:03.135053  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:03.341963  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:03.534040  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:03.534255  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:03.635173  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:03.840876  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:04.033154  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:04.033687  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:04.137514  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:04.341739  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:04.533516  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:04.533882  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:04.539188  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:04.635283  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:04.841442  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:05.032729  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:05.033124  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:05.136681  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:05.340583  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:05.533528  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:05.533935  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:05.634746  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:05.841153  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:06.033894  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:06.034118  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:06.135809  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:06.341845  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:06.532772  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:06.533071  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:06.634900  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:06.840948  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:07.033499  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:07.033971  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:07.038708  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:07.134489  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:07.341453  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:07.533460  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:07.534078  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:07.634752  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:07.841145  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:08.032456  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:08.032600  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:08.134493  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:08.341228  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:08.532960  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:08.533418  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:08.634946  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:08.841209  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:09.032428  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:09.032860  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:09.133910  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:09.340994  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:09.533027  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:09.533325  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:09.538036  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:09.635269  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:09.842042  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:10.033156  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:10.034120  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:10.135121  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:10.409633  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:10.612232  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:10.613683  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:10.708956  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:10.911211  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:11.109514  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:11.111442  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:11.217964  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:11.342875  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:11.534357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:11.536302  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:11.539401  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:11.634349  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:11.841983  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:12.034345  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:12.035360  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:12.135368  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:12.342090  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:12.534830  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:12.535385  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:12.635476  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:12.841981  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:13.033585  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:13.034659  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:13.135255  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:13.341464  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:13.534090  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:13.534423  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:13.634870  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:13.840481  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:14.033127  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:14.033709  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:14.038134  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:14.134521  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:14.341478  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:14.533571  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:14.533775  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:14.634858  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:14.840881  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:15.033065  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:15.033576  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:15.135105  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:15.341248  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:15.533403  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:15.533889  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:15.634506  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:15.841862  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:16.033448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:16.033889  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:16.038574  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:16.134315  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:16.341749  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:16.533707  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:16.534255  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:16.635247  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:16.841160  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:17.032355  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:17.032720  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:17.135060  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:17.341269  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:17.533498  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:17.533806  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:17.538084  542642 pod_ready.go:93] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.538109  542642 pod_ready.go:82] duration metric: took 29.005201782s for pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.538121  542642 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-67wn8" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.542973  542642 pod_ready.go:93] pod "coredns-7c65d6cfc9-67wn8" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.542994  542642 pod_ready.go:82] duration metric: took 4.866917ms for pod "coredns-7c65d6cfc9-67wn8" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.543018  542642 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.547786  542642 pod_ready.go:93] pod "etcd-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.547816  542642 pod_ready.go:82] duration metric: took 4.791299ms for pod "etcd-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.547829  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.552584  542642 pod_ready.go:93] pod "kube-apiserver-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.552607  542642 pod_ready.go:82] duration metric: took 4.769593ms for pod "kube-apiserver-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.552621  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.557407  542642 pod_ready.go:93] pod "kube-controller-manager-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.557431  542642 pod_ready.go:82] duration metric: took 4.801768ms for pod "kube-controller-manager-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.557447  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bx7gb" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.634735  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:17.842241  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:17.936923  542642 pod_ready.go:93] pod "kube-proxy-bx7gb" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.936950  542642 pod_ready.go:82] duration metric: took 379.494749ms for pod "kube-proxy-bx7gb" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.936965  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:18.032935  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:18.033167  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:18.134835  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:18.336919  542642 pod_ready.go:93] pod "kube-scheduler-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:18.336944  542642 pod_ready.go:82] duration metric: took 399.970822ms for pod "kube-scheduler-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:18.336956  542642 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:18.340532  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:18.534909  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:18.536030  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:18.635514  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:18.845416  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:19.033842  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:19.034226  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:19.134919  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:19.341639  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:19.533828  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:19.534372  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:19.634892  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:19.840581  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:20.033493  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:20.033880  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:20.134191  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:20.342213  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:20.343655  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:20.533812  542642 kapi.go:107] duration metric: took 1m10.504828753s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 11:01:20.534024  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:20.634168  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:20.841313  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:21.032943  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:21.134466  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:21.342676  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:21.534559  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:21.635111  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:21.842094  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:22.033536  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:22.135157  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:22.344346  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:22.345191  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:22.533924  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:22.634477  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:22.841611  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:23.033994  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:23.135350  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:23.341871  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:23.535034  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:23.635334  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:23.841785  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:24.032671  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:24.135058  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:24.341235  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:24.535538  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:24.635117  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:24.841542  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:24.843062  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:25.035202  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:25.135226  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:25.341970  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:25.533683  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:25.635778  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:25.841215  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:26.034307  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:26.207043  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:26.508518  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:26.599567  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:26.811174  542642 kapi.go:107] duration metric: took 1m12.680214644s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 11:01:26.813201  542642 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-673472 cluster.
	I1028 11:01:26.814660  542642 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 11:01:26.816531  542642 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1028 11:01:26.921598  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:26.921632  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:27.033951  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:27.409167  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:27.533133  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:27.841444  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:28.033365  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:28.342270  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:28.532917  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:28.845716  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:29.033131  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:29.341164  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:29.342966  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:29.534745  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:29.841647  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:30.033575  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:30.342079  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:30.533487  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:30.842021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:31.034031  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:31.342128  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:31.343654  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:31.534495  542642 kapi.go:107] duration metric: took 1m21.505480203s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 11:01:31.931485  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:32.376419  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:32.841919  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:33.342170  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:33.841820  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:33.843460  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:34.341093  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:34.843097  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:35.341309  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:35.841956  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:36.346779  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:36.351204  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:36.841895  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:37.342406  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:37.854289  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:38.346854  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:38.842971  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:38.843508  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:39.344042  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:39.841491  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:40.341886  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:40.842246  542642 kapi.go:107] duration metric: took 1m29.505873588s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 11:01:40.843126  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:40.844354  542642 out.go:177] * Enabled addons: ingress-dns, inspektor-gadget, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1028 11:01:40.846010  542642 addons.go:510] duration metric: took 1m36.916862098s for enable addons: enabled=[ingress-dns inspektor-gadget amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
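
Editor's note: each kapi.go:96 line above is one tick of a poll loop per addon label selector — list the pods matching the selector, stop once all of them report Ready, and let kapi.go:107 print the elapsed duration. The following is a minimal client-go sketch of that pattern, not minikube's actual implementation; the helper names waitForPodsByLabel and allReady are illustrative.

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsByLabel lists the pods matching selector in ns on a fixed
    // tick until every one of them reports Ready, or the deadline passes.
    // This is the shape of the kapi.go:96 loop above ("waiting for pod
    // <selector>, current state: Pending").
    func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err // a production loop might retry transient API errors instead
            }
            if len(pods.Items) > 0 && allReady(pods.Items) {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
        }
        return fmt.Errorf("pods %q in namespace %q not ready within %v", selector, ns, timeout)
    }

    // allReady reports whether every pod carries a Ready=True condition.
    func allReady(pods []corev1.Pod) bool {
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = waitForPodsByLabel(context.Background(), cs, "kube-system",
            "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
        fmt.Println("wait result:", err)
    }

Run against the addons-673472 cluster, this would converge at roughly the same point the kapi.go:107 lines above record (1m10s–1m29s per addon, depending on image pulls).
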
	I1028 11:01:42.843444  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:44.843554  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:47.343498  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:49.344350  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:51.842829  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:53.843560  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:55.843632  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:57.843974  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:00.343576  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:02.343881  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:04.843727  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:07.343564  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:08.844486  542642 pod_ready.go:93] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"True"
	I1028 11:02:08.844510  542642 pod_ready.go:82] duration metric: took 50.507548227s for pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace to be "Ready" ...
	I1028 11:02:08.844522  542642 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zktff" in "kube-system" namespace to be "Ready" ...
	I1028 11:02:08.848837  542642 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zktff" in "kube-system" namespace has status "Ready":"True"
	I1028 11:02:08.848860  542642 pod_ready.go:82] duration metric: took 4.331704ms for pod "nvidia-device-plugin-daemonset-zktff" in "kube-system" namespace to be "Ready" ...
	I1028 11:02:08.848879  542642 pod_ready.go:39] duration metric: took 1m20.329480138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
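
Editor's note: the pod_ready.go lines interleaved above are the single-pod variant of the same idea — pod_ready.go:79 starts a wait on a named pod, :103 logs Ready=False on each poll, and :93/:82 log success and the elapsed time. A sketch of that per-pod check, written to drop into the same package as the previous sketch (podReady is an assumed name, not minikube's):

    // podReady fetches one named pod and reports whether its Ready condition
    // is True — the check behind the pod_ready.go:93/:103 lines above.
    // (Reuses the imports from the previous sketch.)
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // no Ready condition reported yet
    }
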
	I1028 11:02:08.848901  542642 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:02:08.848936  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:02:08.849006  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:02:08.885551  542642 cri.go:89] found id: "87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:08.885583  542642 cri.go:89] found id: ""
	I1028 11:02:08.885595  542642 logs.go:282] 1 containers: [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802]
	I1028 11:02:08.885647  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:08.889170  542642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 11:02:08.889235  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:02:08.936962  542642 cri.go:89] found id: "86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:08.936987  542642 cri.go:89] found id: ""
	I1028 11:02:08.936999  542642 logs.go:282] 1 containers: [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163]
	I1028 11:02:08.937062  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:08.940948  542642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 11:02:08.941022  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:02:08.977719  542642 cri.go:89] found id: "558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:08.977746  542642 cri.go:89] found id: ""
	I1028 11:02:08.977754  542642 logs.go:282] 1 containers: [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b]
	I1028 11:02:08.977798  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:08.981284  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:02:08.981345  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:02:09.016943  542642 cri.go:89] found id: "f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:09.016973  542642 cri.go:89] found id: ""
	I1028 11:02:09.016983  542642 logs.go:282] 1 containers: [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd]
	I1028 11:02:09.017045  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.020959  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:02:09.021063  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:02:09.058112  542642 cri.go:89] found id: "d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:09.058134  542642 cri.go:89] found id: ""
	I1028 11:02:09.058142  542642 logs.go:282] 1 containers: [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e]
	I1028 11:02:09.058206  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.061716  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:02:09.061790  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:02:09.099845  542642 cri.go:89] found id: "780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:09.099873  542642 cri.go:89] found id: ""
	I1028 11:02:09.099883  542642 logs.go:282] 1 containers: [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c]
	I1028 11:02:09.099951  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.103773  542642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 11:02:09.103866  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:02:09.141449  542642 cri.go:89] found id: "d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:09.141473  542642 cri.go:89] found id: ""
	I1028 11:02:09.141484  542642 logs.go:282] 1 containers: [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769]
	I1028 11:02:09.141537  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.145006  542642 logs.go:123] Gathering logs for kubelet ...
	I1028 11:02:09.145035  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 11:02:09.206966  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.207147  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:09.207271  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.207422  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:09.234931  542642 logs.go:123] Gathering logs for dmesg ...
	I1028 11:02:09.234980  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:02:09.249356  542642 logs.go:123] Gathering logs for kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] ...
	I1028 11:02:09.249394  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:09.296227  542642 logs.go:123] Gathering logs for coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] ...
	I1028 11:02:09.296271  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:09.336130  542642 logs.go:123] Gathering logs for kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] ...
	I1028 11:02:09.336179  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:09.378871  542642 logs.go:123] Gathering logs for kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] ...
	I1028 11:02:09.378908  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:09.414200  542642 logs.go:123] Gathering logs for kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] ...
	I1028 11:02:09.414233  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:09.451220  542642 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:02:09.451256  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:02:09.559383  542642 logs.go:123] Gathering logs for etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] ...
	I1028 11:02:09.559421  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:09.610227  542642 logs.go:123] Gathering logs for kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] ...
	I1028 11:02:09.610267  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:09.671516  542642 logs.go:123] Gathering logs for CRI-O ...
	I1028 11:02:09.671558  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 11:02:09.743818  542642 logs.go:123] Gathering logs for container status ...
	I1028 11:02:09.743868  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:02:09.792271  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:09.792316  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:02:09.792384  542642 out.go:270] X Problems detected in kubelet:
	W1028 11:02:09.792397  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.792410  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:09.792423  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.792430  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:09.792436  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:09.792443  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
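
Editor's note: the diagnostics pass that just completed is the same two-step recipe for every component — resolve the container id, then tail its logs — and the exact commands are visible verbatim in the ssh_runner.go:195 lines (cri.go:54 runs `crictl ps`, logs.go:123 runs `crictl logs --tail 400`). A standalone sketch that shells out to those same commands locally; containerIDs and tailLogs are illustrative names, and crictl typically requires root:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the command the cri.go:54 steps above record:
    //   sudo crictl ps -a --quiet --name=<name>
    // and returns the ids it prints, one per line.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors the logs.go:123 step: sudo crictl logs --tail 400 <id>.
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil || len(ids) == 0 {
            fmt.Println("no kube-apiserver container found:", err)
            return
        }
        logs, err := tailLogs(ids[0])
        if err != nil {
            fmt.Println("crictl logs failed:", err)
            return
        }
        fmt.Print(logs)
    }
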
	I1028 11:02:19.793985  542642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:02:19.808359  542642 api_server.go:72] duration metric: took 2m15.879271272s to wait for apiserver process to appear ...
	I1028 11:02:19.808384  542642 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:02:19.808428  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:02:19.808480  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:02:19.842622  542642 cri.go:89] found id: "87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:19.842654  542642 cri.go:89] found id: ""
	I1028 11:02:19.842666  542642 logs.go:282] 1 containers: [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802]
	I1028 11:02:19.842744  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.846413  542642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 11:02:19.846489  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:02:19.882646  542642 cri.go:89] found id: "86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:19.882680  542642 cri.go:89] found id: ""
	I1028 11:02:19.882692  542642 logs.go:282] 1 containers: [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163]
	I1028 11:02:19.882743  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.886454  542642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 11:02:19.886519  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:02:19.922106  542642 cri.go:89] found id: "558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:19.922128  542642 cri.go:89] found id: ""
	I1028 11:02:19.922138  542642 logs.go:282] 1 containers: [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b]
	I1028 11:02:19.922194  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.925691  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:02:19.925763  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:02:19.960189  542642 cri.go:89] found id: "f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:19.960212  542642 cri.go:89] found id: ""
	I1028 11:02:19.960219  542642 logs.go:282] 1 containers: [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd]
	I1028 11:02:19.960264  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.963896  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:02:19.963963  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:02:19.999912  542642 cri.go:89] found id: "d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:19.999937  542642 cri.go:89] found id: ""
	I1028 11:02:19.999945  542642 logs.go:282] 1 containers: [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e]
	I1028 11:02:20.000005  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:20.003932  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:02:20.004014  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:02:20.042260  542642 cri.go:89] found id: "780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:20.042289  542642 cri.go:89] found id: ""
	I1028 11:02:20.042298  542642 logs.go:282] 1 containers: [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c]
	I1028 11:02:20.042353  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:20.046134  542642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 11:02:20.046197  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:02:20.082205  542642 cri.go:89] found id: "d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:20.082236  542642 cri.go:89] found id: ""
	I1028 11:02:20.082246  542642 logs.go:282] 1 containers: [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769]
	I1028 11:02:20.082305  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:20.086153  542642 logs.go:123] Gathering logs for container status ...
	I1028 11:02:20.086185  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:02:20.129314  542642 logs.go:123] Gathering logs for dmesg ...
	I1028 11:02:20.129353  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:02:20.143558  542642 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:02:20.143595  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:02:20.246106  542642 logs.go:123] Gathering logs for etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] ...
	I1028 11:02:20.246136  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:20.293999  542642 logs.go:123] Gathering logs for coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] ...
	I1028 11:02:20.294043  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:20.332497  542642 logs.go:123] Gathering logs for kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] ...
	I1028 11:02:20.332532  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:20.373877  542642 logs.go:123] Gathering logs for kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] ...
	I1028 11:02:20.373915  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:20.410599  542642 logs.go:123] Gathering logs for kubelet ...
	I1028 11:02:20.410631  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 11:02:20.468141  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.468316  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:20.468439  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.468589  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:20.496348  542642 logs.go:123] Gathering logs for kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] ...
	I1028 11:02:20.496384  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:20.542990  542642 logs.go:123] Gathering logs for kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] ...
	I1028 11:02:20.543040  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:20.578906  542642 logs.go:123] Gathering logs for kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] ...
	I1028 11:02:20.578944  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:20.635286  542642 logs.go:123] Gathering logs for CRI-O ...
	I1028 11:02:20.635333  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 11:02:20.709226  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:20.709267  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:02:20.709343  542642 out.go:270] X Problems detected in kubelet:
	W1028 11:02:20.709361  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.709376  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:20.709390  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.709400  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:20.709412  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:20.709422  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:02:30.710105  542642 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1028 11:02:30.715305  542642 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1028 11:02:30.716476  542642 api_server.go:141] control plane version: v1.31.2
	I1028 11:02:30.716507  542642 api_server.go:131] duration metric: took 10.908117119s to wait for apiserver health ...
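
Editor's note: the api_server.go:253 probe above is a plain GET against the apiserver's /healthz endpoint, accepted once it returns 200 with body "ok" — exactly what the two lines before this one record. A minimal sketch of that loop, taking the node address from the log (192.168.49.2:8443) and skipping TLS verification for brevity; minikube itself authenticates with the cluster's client certificates rather than doing this:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // waitHealthz polls url until it answers 200 "ok" or the timeout passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Demo shortcut only: a real client should trust the
                // cluster CA instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // Matches the `returned 200: ok` line in the log above.
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %v", url, timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver healthy")
    }

Polling healthz is deliberately separate from the earlier pgrep check: the process appearing (api_server.go:72) only proves the binary is running, while a 200 from /healthz proves the apiserver is actually serving requests.
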
	I1028 11:02:30.716516  542642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:02:30.716544  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:02:30.716605  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:02:30.751826  542642 cri.go:89] found id: "87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:30.751846  542642 cri.go:89] found id: ""
	I1028 11:02:30.751854  542642 logs.go:282] 1 containers: [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802]
	I1028 11:02:30.751901  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.755324  542642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 11:02:30.755382  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:02:30.790166  542642 cri.go:89] found id: "86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:30.790190  542642 cri.go:89] found id: ""
	I1028 11:02:30.790198  542642 logs.go:282] 1 containers: [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163]
	I1028 11:02:30.790252  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.793841  542642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 11:02:30.793907  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:02:30.829656  542642 cri.go:89] found id: "558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:30.829679  542642 cri.go:89] found id: ""
	I1028 11:02:30.829686  542642 logs.go:282] 1 containers: [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b]
	I1028 11:02:30.829747  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.833316  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:02:30.833377  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:02:30.868057  542642 cri.go:89] found id: "f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:30.868084  542642 cri.go:89] found id: ""
	I1028 11:02:30.868094  542642 logs.go:282] 1 containers: [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd]
	I1028 11:02:30.868152  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.871825  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:02:30.871893  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:02:30.907336  542642 cri.go:89] found id: "d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:30.907366  542642 cri.go:89] found id: ""
	I1028 11:02:30.907378  542642 logs.go:282] 1 containers: [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e]
	I1028 11:02:30.907433  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.910919  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:02:30.910994  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:02:30.945929  542642 cri.go:89] found id: "780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:30.945955  542642 cri.go:89] found id: ""
	I1028 11:02:30.945965  542642 logs.go:282] 1 containers: [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c]
	I1028 11:02:30.946033  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.949613  542642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 11:02:30.949683  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:02:30.984758  542642 cri.go:89] found id: "d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:30.984787  542642 cri.go:89] found id: ""
	I1028 11:02:30.984798  542642 logs.go:282] 1 containers: [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769]
	I1028 11:02:30.984853  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.988141  542642 logs.go:123] Gathering logs for kubelet ...
	I1028 11:02:30.988159  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 11:02:31.046313  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.046492  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:31.046617  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.046768  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:31.075634  542642 logs.go:123] Gathering logs for kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] ...
	I1028 11:02:31.075678  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:31.123351  542642 logs.go:123] Gathering logs for etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] ...
	I1028 11:02:31.123394  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:31.171642  542642 logs.go:123] Gathering logs for coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] ...
	I1028 11:02:31.171674  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:31.209068  542642 logs.go:123] Gathering logs for kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] ...
	I1028 11:02:31.209103  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:31.250806  542642 logs.go:123] Gathering logs for CRI-O ...
	I1028 11:02:31.250852  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 11:02:31.326790  542642 logs.go:123] Gathering logs for dmesg ...
	I1028 11:02:31.326829  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:02:31.340598  542642 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:02:31.340634  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:02:31.442790  542642 logs.go:123] Gathering logs for kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] ...
	I1028 11:02:31.442823  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:31.477657  542642 logs.go:123] Gathering logs for kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] ...
	I1028 11:02:31.477689  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:31.536097  542642 logs.go:123] Gathering logs for kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] ...
	I1028 11:02:31.536142  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:31.573364  542642 logs.go:123] Gathering logs for container status ...
	I1028 11:02:31.573393  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:02:31.617567  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:31.617600  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:02:31.617671  542642 out.go:270] X Problems detected in kubelet:
	W1028 11:02:31.617684  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.617691  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:31.617700  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.617707  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:31.617714  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:31.617721  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:02:41.627921  542642 system_pods.go:59] 19 kube-system pods found
	I1028 11:02:41.627970  542642 system_pods.go:61] "amd-gpu-device-plugin-rbj2l" [06398681-9fc4-40ad-bf57-1dfbcab84b18] Running
	I1028 11:02:41.627977  542642 system_pods.go:61] "coredns-7c65d6cfc9-67wn8" [cdf89129-7554-4b64-996d-010412cebe81] Running
	I1028 11:02:41.627981  542642 system_pods.go:61] "csi-hostpath-attacher-0" [98fb08da-880f-4a9b-ac30-b1088dc77ed4] Running
	I1028 11:02:41.627985  542642 system_pods.go:61] "csi-hostpath-resizer-0" [3ff39992-a5fa-4c23-b4e8-447516f86aa3] Running
	I1028 11:02:41.627989  542642 system_pods.go:61] "csi-hostpathplugin-bbjgv" [8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6] Running
	I1028 11:02:41.627993  542642 system_pods.go:61] "etcd-addons-673472" [b971d450-f424-4d0c-9ed4-36d27855789f] Running
	I1028 11:02:41.627998  542642 system_pods.go:61] "kindnet-v9f97" [7ee1e13b-0b02-4fa1-91d6-3024c746da7e] Running
	I1028 11:02:41.628003  542642 system_pods.go:61] "kube-apiserver-addons-673472" [b1474126-fd31-4595-a223-36b97b89c20b] Running
	I1028 11:02:41.628008  542642 system_pods.go:61] "kube-controller-manager-addons-673472" [45b96afd-51c3-41e8-8471-ace3e96aa9ab] Running
	I1028 11:02:41.628013  542642 system_pods.go:61] "kube-ingress-dns-minikube" [5c010972-8925-448a-8dbc-f653c352a411] Running
	I1028 11:02:41.628018  542642 system_pods.go:61] "kube-proxy-bx7gb" [33118a0f-5e5a-491e-92f3-adfac41fe8a7] Running
	I1028 11:02:41.628026  542642 system_pods.go:61] "kube-scheduler-addons-673472" [498d5407-0a87-4251-9439-e27f43eed34c] Running
	I1028 11:02:41.628033  542642 system_pods.go:61] "metrics-server-84c5f94fbc-wbsls" [49ebcec4-5d24-4e53-87da-1cbbff8ac5e9] Running
	I1028 11:02:41.628038  542642 system_pods.go:61] "nvidia-device-plugin-daemonset-zktff" [1db498a0-7243-4eed-9b71-4a44ffadbf48] Running
	I1028 11:02:41.628112  542642 system_pods.go:61] "registry-66c9cd494c-lmvk5" [bf3603f0-8ec8-43cc-b75c-299459db5001] Running
	I1028 11:02:41.628119  542642 system_pods.go:61] "registry-proxy-24mvc" [94641dc2-0fe0-44ee-8265-3d276479b3ff] Running
	I1028 11:02:41.628125  542642 system_pods.go:61] "snapshot-controller-56fcc65765-75jc2" [7755b682-0c95-4200-98e2-291af6055537] Running
	I1028 11:02:41.628131  542642 system_pods.go:61] "snapshot-controller-56fcc65765-7sj9h" [47fe8932-32e4-4b34-95f8-e6c4abe22b0f] Running
	I1028 11:02:41.628137  542642 system_pods.go:61] "storage-provisioner" [859db836-484d-4ce9-bb84-ae9a067e2f0d] Running
	I1028 11:02:41.628146  542642 system_pods.go:74] duration metric: took 10.911622468s to wait for pod list to return data ...
	I1028 11:02:41.628158  542642 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:02:41.632312  542642 default_sa.go:45] found service account: "default"
	I1028 11:02:41.632335  542642 default_sa.go:55] duration metric: took 4.168274ms for default service account to be created ...
	I1028 11:02:41.632345  542642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:02:41.642194  542642 system_pods.go:86] 19 kube-system pods found
	I1028 11:02:41.642226  542642 system_pods.go:89] "amd-gpu-device-plugin-rbj2l" [06398681-9fc4-40ad-bf57-1dfbcab84b18] Running
	I1028 11:02:41.642233  542642 system_pods.go:89] "coredns-7c65d6cfc9-67wn8" [cdf89129-7554-4b64-996d-010412cebe81] Running
	I1028 11:02:41.642237  542642 system_pods.go:89] "csi-hostpath-attacher-0" [98fb08da-880f-4a9b-ac30-b1088dc77ed4] Running
	I1028 11:02:41.642241  542642 system_pods.go:89] "csi-hostpath-resizer-0" [3ff39992-a5fa-4c23-b4e8-447516f86aa3] Running
	I1028 11:02:41.642245  542642 system_pods.go:89] "csi-hostpathplugin-bbjgv" [8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6] Running
	I1028 11:02:41.642248  542642 system_pods.go:89] "etcd-addons-673472" [b971d450-f424-4d0c-9ed4-36d27855789f] Running
	I1028 11:02:41.642252  542642 system_pods.go:89] "kindnet-v9f97" [7ee1e13b-0b02-4fa1-91d6-3024c746da7e] Running
	I1028 11:02:41.642255  542642 system_pods.go:89] "kube-apiserver-addons-673472" [b1474126-fd31-4595-a223-36b97b89c20b] Running
	I1028 11:02:41.642259  542642 system_pods.go:89] "kube-controller-manager-addons-673472" [45b96afd-51c3-41e8-8471-ace3e96aa9ab] Running
	I1028 11:02:41.642264  542642 system_pods.go:89] "kube-ingress-dns-minikube" [5c010972-8925-448a-8dbc-f653c352a411] Running
	I1028 11:02:41.642267  542642 system_pods.go:89] "kube-proxy-bx7gb" [33118a0f-5e5a-491e-92f3-adfac41fe8a7] Running
	I1028 11:02:41.642270  542642 system_pods.go:89] "kube-scheduler-addons-673472" [498d5407-0a87-4251-9439-e27f43eed34c] Running
	I1028 11:02:41.642274  542642 system_pods.go:89] "metrics-server-84c5f94fbc-wbsls" [49ebcec4-5d24-4e53-87da-1cbbff8ac5e9] Running
	I1028 11:02:41.642279  542642 system_pods.go:89] "nvidia-device-plugin-daemonset-zktff" [1db498a0-7243-4eed-9b71-4a44ffadbf48] Running
	I1028 11:02:41.642285  542642 system_pods.go:89] "registry-66c9cd494c-lmvk5" [bf3603f0-8ec8-43cc-b75c-299459db5001] Running
	I1028 11:02:41.642288  542642 system_pods.go:89] "registry-proxy-24mvc" [94641dc2-0fe0-44ee-8265-3d276479b3ff] Running
	I1028 11:02:41.642292  542642 system_pods.go:89] "snapshot-controller-56fcc65765-75jc2" [7755b682-0c95-4200-98e2-291af6055537] Running
	I1028 11:02:41.642297  542642 system_pods.go:89] "snapshot-controller-56fcc65765-7sj9h" [47fe8932-32e4-4b34-95f8-e6c4abe22b0f] Running
	I1028 11:02:41.642301  542642 system_pods.go:89] "storage-provisioner" [859db836-484d-4ce9-bb84-ae9a067e2f0d] Running
	I1028 11:02:41.642311  542642 system_pods.go:126] duration metric: took 9.960953ms to wait for k8s-apps to be running ...
	I1028 11:02:41.642322  542642 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:02:41.642371  542642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:02:41.654652  542642 system_svc.go:56] duration metric: took 12.318102ms WaitForService to wait for kubelet
	I1028 11:02:41.654684  542642 kubeadm.go:582] duration metric: took 2m37.72560234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:02:41.654707  542642 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:02:41.657943  542642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1028 11:02:41.657974  542642 node_conditions.go:123] node cpu capacity is 8
	I1028 11:02:41.657988  542642 node_conditions.go:105] duration metric: took 3.276114ms to run NodePressure ...
	I1028 11:02:41.658001  542642 start.go:241] waiting for startup goroutines ...
	I1028 11:02:41.658007  542642 start.go:246] waiting for cluster config update ...
	I1028 11:02:41.658024  542642 start.go:255] writing updated cluster config ...
	I1028 11:02:41.658294  542642 ssh_runner.go:195] Run: rm -f paused
	I1028 11:02:41.711431  542642 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:02:41.713734  542642 out.go:177] * Done! kubectl is now configured to use "addons-673472" cluster and "default" namespace by default
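	
	Analysis: the repeated "Problems detected in kubelet" entries above are Node-authorizer denials, not addon failures. The kubelet authenticates as system:node:addons-673472, and the node authorizer only lets a node read a Secret once a pod bound to that node references it; until the gcp-auth webhook wires the secret into such a pod, the list/watch is forbidden and the reflector retries. The wait itself succeeded: /healthz returned 200 and all 19 kube-system pods reported Running. Both can be re-checked from outside the node (a sketch assuming the addons-673472 kubeconfig context from this run is still reachable; the curl relies on anonymous access to /healthz, which is enabled by default):
	
	  # Ask the authorizer the same question the kubelet's reflector did
	  kubectl --context addons-673472 auth can-i list secrets \
	    --as=system:node:addons-673472 --as-group=system:nodes -n kube-system
	  # The health probe minikube ran against the control plane
	  curl -sk https://192.168.49.2:8443/healthz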
	
	
	==> CRI-O <==
	Oct 28 11:03:58 addons-673472 crio[1040]: time="2024-10-28 11:03:58.968954047Z" level=info msg="Removed pod sandbox: fa302813e5d03999aeef28499ee23e4f571e291892efdb83d5db9e576dd939a0" id=c2345624-7bff-48ff-80c3-cadc272a41db name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.905141534Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-w7m2n/POD" id=2644d0e5-9ceb-405a-9bfe-ffeaff49b888 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.905241423Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.925488326Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-w7m2n Namespace:default ID:0e8afd0447a9f218221a96fa40f28d94d7c7932285ad41266b94138d9e7bd11e UID:ad33d090-0029-4262-b0a2-21017bd0b8c3 NetNS:/var/run/netns/70668373-dc4c-4073-b9e6-3ce3487c07fa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.925528464Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-w7m2n to CNI network \"kindnet\" (type=ptp)"
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.946946835Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-w7m2n Namespace:default ID:0e8afd0447a9f218221a96fa40f28d94d7c7932285ad41266b94138d9e7bd11e UID:ad33d090-0029-4262-b0a2-21017bd0b8c3 NetNS:/var/run/netns/70668373-dc4c-4073-b9e6-3ce3487c07fa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.947133703Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-w7m2n for CNI network kindnet (type=ptp)"
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.949932684Z" level=info msg="Ran pod sandbox 0e8afd0447a9f218221a96fa40f28d94d7c7932285ad41266b94138d9e7bd11e with infra container: default/hello-world-app-55bf9c44b4-w7m2n/POD" id=2644d0e5-9ceb-405a-9bfe-ffeaff49b888 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.952678203Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=125325f5-0794-4cfc-8c6f-19e097b64742 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.952945636Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=125325f5-0794-4cfc-8c6f-19e097b64742 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.953536174Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=449d4f61-7eed-4bd5-87b1-2dfc7b87b9ff name=/runtime.v1.ImageService/PullImage
	Oct 28 11:05:54 addons-673472 crio[1040]: time="2024-10-28 11:05:54.957978184Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.126628022Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.591015877Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=449d4f61-7eed-4bd5-87b1-2dfc7b87b9ff name=/runtime.v1.ImageService/PullImage
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.591649127Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5f2d0845-f994-41b7-b23a-03f5f3e2c112 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.592260406Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5f2d0845-f994-41b7-b23a-03f5f3e2c112 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.594430352Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=11ae1002-5d32-4a87-bd4f-1bd9df8f9e21 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.595204776Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=11ae1002-5d32-4a87-bd4f-1bd9df8f9e21 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.596229405Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-w7m2n/hello-world-app" id=b561482f-f28a-4378-9e10-9962efe8ee62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.596340313Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.613479361Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/846c5decd1522082bb45a26850899cc4ee6ad933c813056d22ee73579b854847/merged/etc/passwd: no such file or directory"
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.613514744Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/846c5decd1522082bb45a26850899cc4ee6ad933c813056d22ee73579b854847/merged/etc/group: no such file or directory"
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.654419273Z" level=info msg="Created container 6804af3c3a6bbdf32efd0f1b1cb47ddc63fad131b9e3a447511125a5d81df184: default/hello-world-app-55bf9c44b4-w7m2n/hello-world-app" id=b561482f-f28a-4378-9e10-9962efe8ee62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.655087318Z" level=info msg="Starting container: 6804af3c3a6bbdf32efd0f1b1cb47ddc63fad131b9e3a447511125a5d81df184" id=7976bda7-a169-4c8e-bca5-cc355ec91770 name=/runtime.v1.RuntimeService/StartContainer
	Oct 28 11:05:55 addons-673472 crio[1040]: time="2024-10-28 11:05:55.661204109Z" level=info msg="Started container" PID=11512 containerID=6804af3c3a6bbdf32efd0f1b1cb47ddc63fad131b9e3a447511125a5d81df184 description=default/hello-world-app-55bf9c44b4-w7m2n/hello-world-app id=7976bda7-a169-4c8e-bca5-cc355ec91770 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0e8afd0447a9f218221a96fa40f28d94d7c7932285ad41266b94138d9e7bd11e
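	
	The CRI-O excerpt captures the full start path for hello-world-app: run the sandbox, attach it to the kindnet ptp CNI network, miss on the local image check, pull docker.io/kicbase/echo-server:1.0 by tag, then create and start the container, all within a second (11:05:54.905 to 11:05:55.661). The /etc/passwd and /etc/group warnings are harmless here; the echo-server image simply ships neither file. A sketch for inspecting the result on the node over minikube ssh (the ID is a prefix from the status table below; crictl accepts truncated IDs):
	
	  sudo crictl images --digests docker.io/kicbase/echo-server:1.0
	  sudo crictl inspect 6804af3c3a6bb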
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	6804af3c3a6bb       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   0e8afd0447a9f       hello-world-app-55bf9c44b4-w7m2n
	2d2542cd954b8       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   15a2f5ee37606       nginx
	8d1474a0966ca       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   66d9210ec05ce       busybox
	67c3976fe918a       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago            Running             controller                0                   fbc804c52c72e       ingress-nginx-controller-5f85ff4588-bxh4n
	310fbe2dabb94       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago            Exited              patch                     0                   9d68dc1c83544       ingress-nginx-admission-patch-nd8pm
	2ab0e34f5bc41       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago            Exited              create                    0                   ba771f66563ad       ingress-nginx-admission-create-zstdd
	ef77ae889b5ef       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago            Running             metrics-server            0                   b06593215f93a       metrics-server-84c5f94fbc-wbsls
	236bb57d6d2d3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago            Running             minikube-ingress-dns      0                   79381d8d1cf83       kube-ingress-dns-minikube
	558c3bfb5f08c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago            Running             coredns                   0                   9c922a06d5d22       coredns-7c65d6cfc9-67wn8
	9c5994b319418       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago            Running             storage-provisioner       0                   3d6fb9799962e       storage-provisioner
	d7dc377c1ec14       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                                             5 minutes ago            Running             kindnet-cni               0                   9d45c24995558       kindnet-v9f97
	d696cc719e6ea       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago            Running             kube-proxy                0                   bd99683fb696e       kube-proxy-bx7gb
	780a49bac595f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             6 minutes ago            Running             kube-controller-manager   0                   f10ba1e222682       kube-controller-manager-addons-673472
	f2f6d4fe59b6a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             6 minutes ago            Running             kube-scheduler            0                   368281cf760e0       kube-scheduler-addons-673472
	87d6522eeaa67       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             6 minutes ago            Running             kube-apiserver            0                   bd97ae32ee7e1       kube-apiserver-addons-673472
	86f61a9b0f576       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             6 minutes ago            Running             etcd                      0                   978b7489a4a8f       etcd-addons-673472
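	
	The table matches the timeline above: hello-world-app started "Less than a second ago" (the 11:05:55 CRI-O entries), nginx has run for about 2 minutes (created for the Ingress test at 11:03:35), and both ingress-nginx admission jobs exited after completing, which is their normal end state. Individual containers can be pulled up by the truncated IDs shown, for example (assuming node shell access):
	
	  sudo crictl ps -a --name hello-world-app
	  sudo crictl logs --tail 50 6804af3c3a6bb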
	
	
	==> coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] <==
	[INFO] 10.244.0.16:47489 - 21679 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080896s
	[INFO] 10.244.0.16:41179 - 42447 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.00568793s
	[INFO] 10.244.0.16:41179 - 42118 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.00575866s
	[INFO] 10.244.0.16:44447 - 14475 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005453238s
	[INFO] 10.244.0.16:44447 - 14671 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00577811s
	[INFO] 10.244.0.16:56242 - 58298 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.0054118s
	[INFO] 10.244.0.16:56242 - 58580 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005505645s
	[INFO] 10.244.0.16:57513 - 60492 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085569s
	[INFO] 10.244.0.16:57513 - 60178 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120947s
	[INFO] 10.244.0.20:38175 - 62566 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188702s
	[INFO] 10.244.0.20:46196 - 29342 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000265612s
	[INFO] 10.244.0.20:50266 - 60722 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132066s
	[INFO] 10.244.0.20:35668 - 54711 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147563s
	[INFO] 10.244.0.20:37026 - 49021 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138823s
	[INFO] 10.244.0.20:50953 - 9659 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000187735s
	[INFO] 10.244.0.20:45501 - 14279 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006226427s
	[INFO] 10.244.0.20:43076 - 40928 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006368315s
	[INFO] 10.244.0.20:40893 - 4514 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005918154s
	[INFO] 10.244.0.20:33221 - 26772 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005907911s
	[INFO] 10.244.0.20:41305 - 59757 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005512951s
	[INFO] 10.244.0.20:42124 - 58285 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005773338s
	[INFO] 10.244.0.20:52091 - 8140 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.004170715s
	[INFO] 10.244.0.20:34409 - 20143 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00421151s
	[INFO] 10.244.0.27:58609 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000235737s
	[INFO] 10.244.0.27:57680 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124381s
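	
	The NXDOMAIN ladder above is ordinary resolv.conf search-path expansion, not a resolution fault. With the default pod resolv.conf (ndots:5), an external name such as storage.googleapis.com is first tried against every search suffix (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-provided us-east4-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal); each suffix accounts for one NXDOMAIN pair, and only the final absolute query returns NOERROR. A sketch to reproduce it from the busybox pod in this run, assuming it is still present:
	
	  kubectl --context addons-673472 exec busybox -- cat /etc/resolv.conf
	  kubectl --context addons-673472 exec busybox -- nslookup storage.googleapis.com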
	
	
	==> describe nodes <==
	Name:               addons-673472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-673472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=addons-673472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T10_59_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-673472
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 10:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-673472
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:05:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:04:02 +0000   Mon, 28 Oct 2024 10:59:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:04:02 +0000   Mon, 28 Oct 2024 10:59:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:04:02 +0000   Mon, 28 Oct 2024 10:59:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:04:02 +0000   Mon, 28 Oct 2024 11:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-673472
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fb705493fdf4a4695128d58fcd0c875
	  System UUID:                17eec836-98df-4a92-abb5-eb6145cff181
	  Boot ID:                    a5d554e2-50f9-4cf6-aaf5-eeaeea5ccf20
	  Kernel Version:             5.15.0-1070-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     hello-world-app-55bf9c44b4-w7m2n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-bxh4n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m47s
	  kube-system                 coredns-7c65d6cfc9-67wn8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m53s
	  kube-system                 etcd-addons-673472                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m58s
	  kube-system                 kindnet-v9f97                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m53s
	  kube-system                 kube-apiserver-addons-673472                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-controller-manager-addons-673472        200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-bx7gb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-scheduler-addons-673472                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 metrics-server-84c5f94fbc-wbsls              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m48s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 5m47s                kube-proxy       
	  Normal   Starting                 6m3s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m3s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m3s (x8 over 6m3s)  kubelet          Node addons-673472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet          Node addons-673472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m3s (x7 over 6m3s)  kubelet          Node addons-673472 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m58s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m58s                kubelet          Node addons-673472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m58s                kubelet          Node addons-673472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m58s                kubelet          Node addons-673472 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m54s                node-controller  Node addons-673472 event: Registered Node addons-673472 in Controller
	  Normal   NodeReady                5m8s                 kubelet          Node addons-673472 status is now: NodeReady
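	
	The Allocated resources block is just the column sum of the pod table: CPU requests 100m (ingress controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) + 100m (metrics-server) = 1050m, and 1050m of the node's 8000m is about 13%, matching the printed figure; memory requests 90 + 70 + 100 + 50 + 200 = 510Mi, roughly 1% of 32859316Ki. To re-derive it against a live cluster (assuming the context is reachable):
	
	  kubectl --context addons-673472 describe node addons-673472 | grep -A 7 'Allocated resources'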
	
	
	==> dmesg <==
	[  +0.000655] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000642] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000794] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000671] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000683] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000639] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.688659] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024607] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.032061] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.028193] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.324014] kauditd_printk_skb: 44 callbacks suppressed
	[Oct28 11:03] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +1.023428] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +2.019803] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +4.219728] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +8.191369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[Oct28 11:04] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[ +34.045529] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
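	
	The martian-source lines are the kernel discarding packets that claim source 127.0.0.1 while arriving on eth0 addressed to pod IP 10.244.0.21; 127.0.0.0/8 is never valid off loopback. The spacing (about 1s, 2s, 4s, 8s) is classic TCP SYN retransmission backoff, and the 11:03-11:04 window lines up with the in-node curl to http://127.0.0.1/ that timed out with exit status 28 during the Ingress test, so these are very likely the hairpin leg of that failed request. Martian logging is a per-interface sysctl; a read-only check on the node:
	
	  sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.rp_filter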
	
	
	==> etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] <==
	{"level":"info","ts":"2024-10-28T11:00:07.308200Z","caller":"traceutil/trace.go:171","msg":"trace[932224814] transaction","detail":"{read_only:false; number_of_response:1; response_revision:432; }","duration":"100.728977ms","start":"2024-10-28T11:00:07.207448Z","end":"2024-10-28T11:00:07.308177Z","steps":["trace[932224814] 'process raft request'  (duration: 19.989254ms)","trace[932224814] 'compare'  (duration: 80.267981ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:00:07.606279Z","caller":"traceutil/trace.go:171","msg":"trace[1523903152] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"288.063987ms","start":"2024-10-28T11:00:07.318199Z","end":"2024-10-28T11:00:07.606263Z","steps":["trace[1523903152] 'process raft request'  (duration: 287.931241ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:00:07.616188Z","caller":"traceutil/trace.go:171","msg":"trace[1655175651] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"293.280924ms","start":"2024-10-28T11:00:07.322890Z","end":"2024-10-28T11:00:07.616171Z","steps":["trace[1655175651] 'process raft request'  (duration: 293.193728ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:00:07.616285Z","caller":"traceutil/trace.go:171","msg":"trace[1843020188] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"296.477771ms","start":"2024-10-28T11:00:07.319791Z","end":"2024-10-28T11:00:07.616269Z","steps":["trace[1843020188] 'process raft request'  (duration: 288.388544ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:00:07.616381Z","caller":"traceutil/trace.go:171","msg":"trace[694769862] transaction","detail":"{read_only:false; number_of_response:1; response_revision:438; }","duration":"296.388847ms","start":"2024-10-28T11:00:07.319982Z","end":"2024-10-28T11:00:07.616371Z","steps":["trace[694769862] 'process raft request'  (duration: 296.037145ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:26.582734Z","caller":"traceutil/trace.go:171","msg":"trace[1226832532] linearizableReadLoop","detail":"{readStateIndex:1177; appliedIndex:1176; }","duration":"155.4859ms","start":"2024-10-28T11:01:26.427218Z","end":"2024-10-28T11:01:26.582704Z","steps":["trace[1226832532] 'read index received'  (duration: 155.331163ms)","trace[1226832532] 'applied index is now lower than readState.Index'  (duration: 153.908µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:26.582775Z","caller":"traceutil/trace.go:171","msg":"trace[1334436647] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"161.235746ms","start":"2024-10-28T11:01:26.421518Z","end":"2024-10-28T11:01:26.582753Z","steps":["trace[1334436647] 'process raft request'  (duration: 161.031275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:01:26.582925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.68281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-673472\" ","response":"range_response_count:1 size:5985"}
	{"level":"info","ts":"2024-10-28T11:01:26.582970Z","caller":"traceutil/trace.go:171","msg":"trace[1065955371] range","detail":"{range_begin:/registry/minions/addons-673472; range_end:; response_count:1; response_revision:1145; }","duration":"155.738798ms","start":"2024-10-28T11:01:26.427214Z","end":"2024-10-28T11:01:26.582952Z","steps":["trace[1065955371] 'agreement among raft nodes before linearized reading'  (duration: 155.58609ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:26.593230Z","caller":"traceutil/trace.go:171","msg":"trace[184068508] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"165.853483ms","start":"2024-10-28T11:01:26.427356Z","end":"2024-10-28T11:01:26.593209Z","steps":["trace[184068508] 'process raft request'  (duration: 165.74865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:01:26.805948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.281813ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032861497871490 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/gcp-auth/gcp-auth\" mod_revision:771 > success:<request_put:<key:\"/registry/services/endpoints/gcp-auth/gcp-auth\" value_size:499 >> failure:<request_range:<key:\"/registry/services/endpoints/gcp-auth/gcp-auth\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T11:01:26.806178Z","caller":"traceutil/trace.go:171","msg":"trace[593227183] linearizableReadLoop","detail":"{readStateIndex:1181; appliedIndex:1178; }","duration":"172.42603ms","start":"2024-10-28T11:01:26.633742Z","end":"2024-10-28T11:01:26.806168Z","steps":["trace[593227183] 'read index received'  (duration: 20.88635ms)","trace[593227183] 'applied index is now lower than readState.Index'  (duration: 151.539015ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:26.806181Z","caller":"traceutil/trace.go:171","msg":"trace[1484974083] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"219.073952ms","start":"2024-10-28T11:01:26.587090Z","end":"2024-10-28T11:01:26.806164Z","steps":["trace[1484974083] 'process raft request'  (duration: 67.4872ms)","trace[1484974083] 'compare'  (duration: 151.158811ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:26.806204Z","caller":"traceutil/trace.go:171","msg":"trace[771625790] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"219.046453ms","start":"2024-10-28T11:01:26.587136Z","end":"2024-10-28T11:01:26.806182Z","steps":["trace[771625790] 'process raft request'  (duration: 218.913621ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:26.806336Z","caller":"traceutil/trace.go:171","msg":"trace[1779034871] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"218.016967ms","start":"2024-10-28T11:01:26.588311Z","end":"2024-10-28T11:01:26.806328Z","steps":["trace[1779034871] 'process raft request'  (duration: 217.806219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:01:26.806389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.647938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:01:26.806432Z","caller":"traceutil/trace.go:171","msg":"trace[1662475941] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1149; }","duration":"172.698955ms","start":"2024-10-28T11:01:26.633724Z","end":"2024-10-28T11:01:26.806423Z","steps":["trace[1662475941] 'agreement among raft nodes before linearized reading'  (duration: 172.626372ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:32.506091Z","caller":"traceutil/trace.go:171","msg":"trace[94474605] linearizableReadLoop","detail":"{readStateIndex:1210; appliedIndex:1209; }","duration":"129.568034ms","start":"2024-10-28T11:01:32.376498Z","end":"2024-10-28T11:01:32.506066Z","steps":["trace[94474605] 'read index received'  (duration: 61.82986ms)","trace[94474605] 'applied index is now lower than readState.Index'  (duration: 67.737349ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:32.506173Z","caller":"traceutil/trace.go:171","msg":"trace[17385449] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"129.836214ms","start":"2024-10-28T11:01:32.376281Z","end":"2024-10-28T11:01:32.506117Z","steps":["trace[17385449] 'process raft request'  (duration: 62.09369ms)","trace[17385449] 'compare'  (duration: 67.584312ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:01:32.506275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.750774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-673472\" ","response":"range_response_count:1 size:5985"}
	{"level":"info","ts":"2024-10-28T11:01:32.506309Z","caller":"traceutil/trace.go:171","msg":"trace[936494536] range","detail":"{range_begin:/registry/minions/addons-673472; range_end:; response_count:1; response_revision:1177; }","duration":"129.806601ms","start":"2024-10-28T11:01:32.376493Z","end":"2024-10-28T11:01:32.506300Z","steps":["trace[936494536] 'agreement among raft nodes before linearized reading'  (duration: 129.670029ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:37.979190Z","caller":"traceutil/trace.go:171","msg":"trace[11625262] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"127.539135ms","start":"2024-10-28T11:01:37.851633Z","end":"2024-10-28T11:01:37.979173Z","steps":["trace[11625262] 'process raft request'  (duration: 62.581026ms)","trace[11625262] 'compare'  (duration: 64.73073ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:37.979177Z","caller":"traceutil/trace.go:171","msg":"trace[1437064814] linearizableReadLoop","detail":"{readStateIndex:1242; appliedIndex:1241; }","duration":"125.1034ms","start":"2024-10-28T11:01:37.854046Z","end":"2024-10-28T11:01:37.979149Z","steps":["trace[1437064814] 'read index received'  (duration: 60.135963ms)","trace[1437064814] 'applied index is now lower than readState.Index'  (duration: 64.966272ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:01:37.979346Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.283733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-673472\" ","response":"range_response_count:1 size:5985"}
	{"level":"info","ts":"2024-10-28T11:01:37.979381Z","caller":"traceutil/trace.go:171","msg":"trace[312256287] range","detail":"{range_begin:/registry/minions/addons-673472; range_end:; response_count:1; response_revision:1205; }","duration":"125.336366ms","start":"2024-10-28T11:01:37.854035Z","end":"2024-10-28T11:01:37.979371Z","steps":["trace[312256287] 'agreement among raft nodes before linearized reading'  (duration: 125.165792ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:05:56 up  2:48,  0 users,  load average: 0.86, 10.76, 56.65
	Linux addons-673472 5.15.0-1070-gcp #78~20.04.1-Ubuntu SMP Wed Oct 9 22:05:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] <==
	I1028 11:03:47.906859       1 main.go:300] handling current node
	I1028 11:03:57.907162       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:03:57.907200       1 main.go:300] handling current node
	I1028 11:04:07.907244       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:04:07.907282       1 main.go:300] handling current node
	I1028 11:04:17.912844       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:04:17.912888       1 main.go:300] handling current node
	I1028 11:04:27.914060       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:04:27.914103       1 main.go:300] handling current node
	I1028 11:04:37.908848       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:04:37.908896       1 main.go:300] handling current node
	I1028 11:04:47.913630       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:04:47.913678       1 main.go:300] handling current node
	I1028 11:04:57.915799       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:04:57.915838       1 main.go:300] handling current node
	I1028 11:05:07.907375       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:05:07.907423       1 main.go:300] handling current node
	I1028 11:05:17.912836       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:05:17.912875       1 main.go:300] handling current node
	I1028 11:05:27.915993       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:05:27.916048       1 main.go:300] handling current node
	I1028 11:05:37.907051       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:05:37.907175       1 main.go:300] handling current node
	I1028 11:05:47.913839       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:05:47.913886       1 main.go:300] handling current node
	
	
	==> kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 11:02:13.857502       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 11:02:52.435604       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52148: use of closed network connection
	E1028 11:02:52.605932       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52174: use of closed network connection
	I1028 11:03:01.639658       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.129.166"}
	I1028 11:03:30.265863       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E1028 11:03:31.212447       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W1028 11:03:31.321002       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 11:03:35.765401       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 11:03:35.958723       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.26.74"}
	I1028 11:03:38.907880       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 11:03:55.695720       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.695782       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:03:55.728888       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.729009       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:03:55.818778       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.818937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:03:55.829159       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.829713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 11:03:56.819964       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 11:03:56.834810       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 11:03:56.905657       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 11:05:54.814712       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.236.91"}
	
	
	==> kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] <==
	W1028 11:04:37.747853       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:04:37.747900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:04:38.355143       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:04:38.355188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:04:51.726415       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:04:51.726459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:05:06.843233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:05:06.843279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:05:15.803885       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:05:15.803943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:05:18.252427       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:05:18.252473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:05:40.777101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:05:40.777157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:05:51.330199       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:05:51.330246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1028 11:05:54.568796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.890614ms"
	I1028 11:05:54.575910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.056776ms"
	I1028 11:05:54.575990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.682µs"
	I1028 11:05:54.576033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.118µs"
	I1028 11:05:54.582015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="73.516µs"
	W1028 11:05:55.054436       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:05:55.054490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1028 11:05:56.198027       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.247993ms"
	I1028 11:05:56.198213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.598µs"
	
	
	==> kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] <==
	I1028 11:00:06.428936       1 server_linux.go:66] "Using iptables proxy"
	I1028 11:00:07.908221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 11:00:07.908479       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:00:08.508253       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 11:00:08.508356       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:00:08.514904       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:00:08.517596       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:00:08.517984       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:00:08.520203       1 config.go:199] "Starting service config controller"
	I1028 11:00:08.521561       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:00:08.520704       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:00:08.521683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:00:08.521278       1 config.go:328] "Starting node config controller"
	I1028 11:00:08.521742       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:00:08.622392       1 shared_informer.go:320] Caches are synced for node config
	I1028 11:00:08.622394       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:00:08.622401       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] <==
	W1028 10:59:56.227184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1028 10:59:56.227218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 10:59:56.227230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1028 10:59:56.227324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.040202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 10:59:57.040247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.070694       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 10:59:57.070752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.073947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 10:59:57.073989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.122466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 10:59:57.122513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.250605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 10:59:57.250651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.255021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 10:59:57.255063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.297855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 10:59:57.297912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.342782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 10:59:57.342833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.398440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 10:59:57.398487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.605853       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 10:59:57.605895       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1028 10:59:59.624074       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 11:05:48 addons-673472 kubelet[1632]: E1028 11:05:48.945923    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113548945544818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599399,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:05:48 addons-673472 kubelet[1632]: E1028 11:05:48.945972    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113548945544818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599399,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570436    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="csi-external-health-monitor-controller"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570485    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="hostpath"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570494    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47fe8932-32e4-4b34-95f8-e6c4abe22b0f" containerName="volume-snapshot-controller"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570503    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="liveness-probe"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570512    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="node-driver-registrar"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570520    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ff39992-a5fa-4c23-b4e8-447516f86aa3" containerName="csi-resizer"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570531    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eaa9702d-ee08-4b6d-b889-be03cf65a689" containerName="task-pv-container"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570540    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="csi-provisioner"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570549    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="csi-snapshotter"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570560    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7755b682-0c95-4200-98e2-291af6055537" containerName="volume-snapshot-controller"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: E1028 11:05:54.570570    1632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98fb08da-880f-4a9b-ac30-b1088dc77ed4" containerName="csi-attacher"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570618    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="csi-external-health-monitor-controller"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570629    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="hostpath"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570636    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="47fe8932-32e4-4b34-95f8-e6c4abe22b0f" containerName="volume-snapshot-controller"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570643    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fb08da-880f-4a9b-ac30-b1088dc77ed4" containerName="csi-attacher"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570651    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="node-driver-registrar"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570659    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa9702d-ee08-4b6d-b889-be03cf65a689" containerName="task-pv-container"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570666    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7755b682-0c95-4200-98e2-291af6055537" containerName="volume-snapshot-controller"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570676    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="csi-provisioner"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570683    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="csi-snapshotter"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570690    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6" containerName="liveness-probe"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.570697    1632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff39992-a5fa-4c23-b4e8-447516f86aa3" containerName="csi-resizer"
	Oct 28 11:05:54 addons-673472 kubelet[1632]: I1028 11:05:54.727885    1632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5h7\" (UniqueName: \"kubernetes.io/projected/ad33d090-0029-4262-b0a2-21017bd0b8c3-kube-api-access-tp5h7\") pod \"hello-world-app-55bf9c44b4-w7m2n\" (UID: \"ad33d090-0029-4262-b0a2-21017bd0b8c3\") " pod="default/hello-world-app-55bf9c44b4-w7m2n"
	
	
	==> storage-provisioner [9c5994b319418ea2b9da3599b93024a16ec2b2a2060f1eb06019e311d4b3e36a] <==
	I1028 11:00:49.120426       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:00:49.128116       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:00:49.128193       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:00:49.138757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:00:49.138904       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbaeb622-2a3a-47c5-8672-b3e4cec045b1", APIVersion:"v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-673472_d714ab4c-d27b-4db4-80f2-b1df72977db8 became leader
	I1028 11:00:49.138919       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-673472_d714ab4c-d27b-4db4-80f2-b1df72977db8!
	I1028 11:00:49.240010       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-673472_d714ab4c-d27b-4db4-80f2-b1df72977db8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-673472 -n addons-673472
helpers_test.go:261: (dbg) Run:  kubectl --context addons-673472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zstdd ingress-nginx-admission-patch-nd8pm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-673472 describe pod ingress-nginx-admission-create-zstdd ingress-nginx-admission-patch-nd8pm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-673472 describe pod ingress-nginx-admission-create-zstdd ingress-nginx-admission-patch-nd8pm: exit status 1 (61.010605ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zstdd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nd8pm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-673472 describe pod ingress-nginx-admission-create-zstdd ingress-nginx-admission-patch-nd8pm: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 addons disable ingress-dns --alsologtostderr -v=1: (1.045904544s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 addons disable ingress --alsologtostderr -v=1: (7.65520004s)
--- FAIL: TestAddons/parallel/Ingress (150.41s)
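Note that the post-mortem above flags only the two ingress-nginx admission Job pods as non-running, and both return NotFound by the time describe runs; the completed admission Job pods were likely removed between the list and the describe, so those errors are post-mortem noise rather than the failure itself. When this test fails, a minimal triage sketch against the same profile, assuming the ingress addon's usual controller deployment name (ingress-nginx-controller, which this log does not show) and the host the test curls (nginx.example.com per minikube's testdata):

	# Confirm the Ingress object was admitted and reports an address
	kubectl --context addons-673472 get ingress -A -o wide
	# Look for admission or sync errors in the controller (deployment name assumed)
	kubectl --context addons-673472 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
	# Re-run the probe from inside the node with an explicit timeout so a hang fails fast
	minikube -p addons-673472 ssh "curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"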

                                                
                                    
TestAddons/parallel/MetricsServer (361.92s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.357472ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-wbsls" [49ebcec4-5d24-4e53-87da-1cbbff8ac5e9] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004040838s
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (70.844742ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 3m18.631386863s

                                                
                                                
** /stderr **
I1028 11:03:21.634111  541347 retry.go:31] will retry after 3.876954351s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (67.252525ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 3m22.576214269s

                                                
                                                
** /stderr **
I1028 11:03:25.579362  541347 retry.go:31] will retry after 6.706997799s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (68.810728ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 3m29.353225563s

                                                
                                                
** /stderr **
I1028 11:03:32.356177  541347 retry.go:31] will retry after 7.290569021s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (67.70042ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 3m36.711736621s

                                                
                                                
** /stderr **
I1028 11:03:39.714821  541347 retry.go:31] will retry after 14.666596561s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (74.77701ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 3m51.454242444s

                                                
                                                
** /stderr **
I1028 11:03:54.456769  541347 retry.go:31] will retry after 13.188203948s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (69.869342ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 4m4.712840433s

                                                
                                                
** /stderr **
I1028 11:04:07.716034  541347 retry.go:31] will retry after 17.134984364s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (66.623831ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 4m21.915637159s

                                                
                                                
** /stderr **
I1028 11:04:24.918684  541347 retry.go:31] will retry after 22.7965137s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (65.607125ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 4m44.77969228s

                                                
                                                
** /stderr **
I1028 11:04:47.782648  541347 retry.go:31] will retry after 1m2.370874775s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (69.08597ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 5m47.22035054s

                                                
                                                
** /stderr **
I1028 11:05:50.223563  541347 retry.go:31] will retry after 44.941213954s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (68.686989ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 6m32.23079587s

                                                
                                                
** /stderr **
I1028 11:06:35.233750  541347 retry.go:31] will retry after 1m16.617768629s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (68.385882ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 7m48.922238087s

                                                
                                                
** /stderr **
I1028 11:07:51.925462  541347 retry.go:31] will retry after 1m23.970135259s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-673472 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-673472 top pods -n kube-system: exit status 1 (67.606772ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-67wn8, age: 9m12.961187579s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
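Each retry above fails the same way: kubectl reaches the metrics.k8s.io aggregated API but it holds no sample for the pod ("Metrics not available for pod ..."). That matches the kube-apiserver log earlier in this report, where the v1beta1.metrics.k8s.io APIService answered 503 during the run, and, possibly related, the kubelet's repeated "missing image stats" errors from the cri-o image filesystem shown above. A minimal sketch of how one might localize the problem, assuming the same context name:

	# The Available condition on the APIService explains why aggregation is (or was) failing
	kubectl --context addons-673472 get apiservice v1beta1.metrics.k8s.io -o yaml
	# Query the metrics API directly, bypassing kubectl top's formatting
	kubectl --context addons-673472 get --raw /apis/metrics.k8s.io/v1beta1/pods
	# Check metrics-server itself for scrape or kubelet-stats errors
	kubectl --context addons-673472 -n kube-system logs -l k8s-app=metrics-server --tail=50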
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-673472
helpers_test.go:235: (dbg) docker inspect addons-673472:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735",
	        "Created": "2024-10-28T10:59:43.022756242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T10:59:43.16228893Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:05bcd996665116a573f1bc98d7e2b0a5da287feef26d621bbd294f87ee72c630",
	        "ResolvConfPath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/hosts",
	        "LogPath": "/var/lib/docker/containers/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735/e8b924fc64073dc02f22bcf1007e26515b922633e83268091dafc650be83a735-json.log",
	        "Name": "/addons-673472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-673472:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-673472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18-init/diff:/var/lib/docker/overlay2/d473489c45702c25b9c588a4584ae1c4861c78e651ffd702dd9d50699009da5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aad7895549e0b7ea085ca2af3f11c087fac6bf570ad2dc4bd73feee2d5f93b18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-673472",
	                "Source": "/var/lib/docker/volumes/addons-673472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-673472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-673472",
	                "name.minikube.sigs.k8s.io": "addons-673472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90a6408866ae52c01dd153915206bcb1bf2a1623cf9e2dcfbd16c2fec6a503ea",
	            "SandboxKey": "/var/run/docker/netns/90a6408866ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-673472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "60819e8d84bc026fde010beabfc4e4d1bdc6f9809a10a5a0a7b142a5bfb4baef",
	                    "EndpointID": "089bb0d5dfdce7b823abf8a94ca5ede0fd01eb42485fbeb601f7c10fec1b43d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-673472",
	                        "e8b924fc6407"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-673472 -n addons-673472
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 logs -n 25: (1.161946755s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-026471                                                                   | download-docker-026471 | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-985535   | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | binary-mirror-985535                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40257                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-985535                                                                     | binary-mirror-985535   | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| addons  | enable dashboard -p                                                                         | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | addons-673472                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | addons-673472                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-673472 --wait=true                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 11:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:02 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:03 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | -p addons-673472                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-673472 ssh cat                                                                       | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | /opt/local-path-provisioner/pvc-b5123f9c-13e2-4f3b-9621-6a638e949257_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-673472 ip                                                                            | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-673472 ssh curl -s                                                                   | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-673472 addons                                                                        | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-673472 ip                                                                            | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:05 UTC | 28 Oct 24 11:05 UTC |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:05 UTC | 28 Oct 24 11:05 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-673472 addons disable                                                                | addons-673472          | jenkins | v1.34.0 | 28 Oct 24 11:05 UTC | 28 Oct 24 11:06 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:59:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:59:20.754511  542642 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:59:20.754981  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:20.755000  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 10:59:20.755013  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:20.755499  542642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 10:59:20.756452  542642 out.go:352] Setting JSON to false
	I1028 10:59:20.757469  542642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9705,"bootTime":1730103456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:59:20.757580  542642 start.go:139] virtualization: kvm guest
	I1028 10:59:20.759710  542642 out.go:177] * [addons-673472] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 10:59:20.761657  542642 notify.go:220] Checking for updates...
	I1028 10:59:20.761693  542642 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 10:59:20.763215  542642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:59:20.764504  542642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 10:59:20.765866  542642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	I1028 10:59:20.767378  542642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 10:59:20.768819  542642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 10:59:20.770226  542642 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:59:20.793533  542642 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:59:20.793626  542642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:20.840792  542642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:20.831923049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:20.840909  542642 docker.go:318] overlay module found
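
The check above shells out to `docker system info --format "{{json .}}"` and decodes the result before picking a driver. A minimal Go sketch of that pattern, decoding only a few of the fields the log prints (the struct here is illustrative, not minikube's own type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields we care about; the full JSON payload has many more.
    type dockerInfo struct {
        Driver       string `json:"Driver"`
        CgroupDriver string `json:"CgroupDriver"`
        NCPU         int    `json:"NCPU"`
        MemTotal     int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("driver=%s cgroup=%s cpus=%d mem=%d\n",
            info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal)
    }
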
	I1028 10:59:20.842859  542642 out.go:177] * Using the docker driver based on user configuration
	I1028 10:59:20.844253  542642 start.go:297] selected driver: docker
	I1028 10:59:20.844275  542642 start.go:901] validating driver "docker" against <nil>
	I1028 10:59:20.844289  542642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 10:59:20.845157  542642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:20.892051  542642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:20.882465964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:20.892234  542642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:59:20.892485  542642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 10:59:20.894462  542642 out.go:177] * Using Docker driver with root privileges
	I1028 10:59:20.895759  542642 cni.go:84] Creating CNI manager for ""
	I1028 10:59:20.895851  542642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:20.895867  542642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
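
The two cni.go lines above record a decision: the docker driver combined with the crio runtime gets kindnet as the recommended CNI, which in turn forces NetworkPlugin=cni. A simplified, illustrative reconstruction of that branch (not the actual cni.go source):

    package main

    import "fmt"

    // chooseCNI mirrors the decision logged above in spirit: the docker
    // driver with a non-docker runtime needs an explicit CNI, and kindnet
    // is the recommendation for that combination.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime == "crio" {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // kindnet
    }
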
	I1028 10:59:20.895958  542642 start.go:340] cluster config:
	{Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:20.897432  542642 out.go:177] * Starting "addons-673472" primary control-plane node in "addons-673472" cluster
	I1028 10:59:20.898778  542642 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 10:59:20.900132  542642 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1028 10:59:20.901490  542642 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:20.901546  542642 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 10:59:20.901561  542642 cache.go:56] Caching tarball of preloaded images
	I1028 10:59:20.901597  542642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 10:59:20.901673  542642 preload.go:172] Found /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 10:59:20.901689  542642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 10:59:20.902131  542642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/config.json ...
	I1028 10:59:20.902161  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/config.json: {Name:mk4756f90b022f398c58cfd7f5b361a437b707b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:20.917908  542642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:20.918047  542642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1028 10:59:20.918065  542642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory, skipping pull
	I1028 10:59:20.918070  542642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in cache, skipping pull
	I1028 10:59:20.918078  542642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e as a tarball
	I1028 10:59:20.918085  542642 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e from local cache
	I1028 10:59:33.015225  542642 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e from cached tarball
	I1028 10:59:33.015277  542642 cache.go:194] Successfully downloaded all kic artifacts
	I1028 10:59:33.015322  542642 start.go:360] acquireMachinesLock for addons-673472: {Name:mkc162c1f445a325af5ddcd3a485171b8916426b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:59:33.015518  542642 start.go:364] duration metric: took 127.084µs to acquireMachinesLock for "addons-673472"
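
acquireMachinesLock is a host-level mutex so that parallel minikube invocations cannot provision the same machine; note the Delay:500ms and Timeout:10m0s parameters in the lock spec above. A minimal sketch of a file-based lock with the same retry shape, assuming O_EXCL create semantics suffice (minikube's real implementation uses a lock library, not this code):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire polls until it can create the lock file exclusively,
    // or the timeout elapses.
    func acquire(path string, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for " + path)
            }
            time.Sleep(500 * time.Millisecond) // matches the Delay:500ms above
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; provisioning can proceed")
    }
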
	I1028 10:59:33.015572  542642 start.go:93] Provisioning new machine with config: &{Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 10:59:33.015714  542642 start.go:125] createHost starting for "" (driver="docker")
	I1028 10:59:33.018654  542642 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1028 10:59:33.018936  542642 start.go:159] libmachine.API.Create for "addons-673472" (driver="docker")
	I1028 10:59:33.018979  542642 client.go:168] LocalClient.Create starting
	I1028 10:59:33.019097  542642 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem
	I1028 10:59:33.186527  542642 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem
	I1028 10:59:33.313958  542642 cli_runner.go:164] Run: docker network inspect addons-673472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1028 10:59:33.330343  542642 cli_runner.go:211] docker network inspect addons-673472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1028 10:59:33.330422  542642 network_create.go:284] running [docker network inspect addons-673472] to gather additional debugging logs...
	I1028 10:59:33.330443  542642 cli_runner.go:164] Run: docker network inspect addons-673472
	W1028 10:59:33.346911  542642 cli_runner.go:211] docker network inspect addons-673472 returned with exit code 1
	I1028 10:59:33.346949  542642 network_create.go:287] error running [docker network inspect addons-673472]: docker network inspect addons-673472: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-673472 not found
	I1028 10:59:33.346963  542642 network_create.go:289] output of [docker network inspect addons-673472]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-673472 not found
	
	** /stderr **
	I1028 10:59:33.347126  542642 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 10:59:33.364522  542642 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c81d10}
	I1028 10:59:33.364576  542642 network_create.go:124] attempt to create docker network addons-673472 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1028 10:59:33.364624  542642 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-673472 addons-673472
	I1028 10:59:33.431919  542642 network_create.go:108] docker network addons-673472 192.168.49.0/24 created
	I1028 10:59:33.431967  542642 kic.go:121] calculated static IP "192.168.49.2" for the "addons-673472" container
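
Given the free subnet 192.168.49.0/24 chosen above, the gateway is the first host address (.1) and the single node gets the next one (.2), which is exactly the static IP the log reports. A small Go sketch of that derivation:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The free private subnet picked in the log.
        _, ipnet, err := net.ParseCIDR("192.168.49.0/24")
        if err != nil {
            panic(err)
        }
        base := ipnet.IP.To4()
        gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)
        node := net.IPv4(base[0], base[1], base[2], base[3]+2)
        fmt.Println("gateway:", gateway) // 192.168.49.1
        fmt.Println("node:", node)       // 192.168.49.2
    }
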
	I1028 10:59:33.432037  542642 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1028 10:59:33.447921  542642 cli_runner.go:164] Run: docker volume create addons-673472 --label name.minikube.sigs.k8s.io=addons-673472 --label created_by.minikube.sigs.k8s.io=true
	I1028 10:59:33.466308  542642 oci.go:103] Successfully created a docker volume addons-673472
	I1028 10:59:33.466394  542642 cli_runner.go:164] Run: docker run --rm --name addons-673472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673472 --entrypoint /usr/bin/test -v addons-673472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib
	I1028 10:59:38.431407  542642 cli_runner.go:217] Completed: docker run --rm --name addons-673472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673472 --entrypoint /usr/bin/test -v addons-673472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib: (4.964952883s)
	I1028 10:59:38.431451  542642 oci.go:107] Successfully prepared a docker volume addons-673472
	I1028 10:59:38.431498  542642 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:38.431554  542642 kic.go:194] Starting extracting preloaded images to volume ...
	I1028 10:59:38.431642  542642 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-673472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir
	I1028 10:59:42.953597  542642 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-673472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir: (4.521892283s)
	I1028 10:59:42.953640  542642 kic.go:203] duration metric: took 4.522081875s to extract preloaded images to volume ...
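
The extraction step mounts the preload tarball read-only next to the named volume and untars it with lz4 inside the kic base image, so the node's container images exist before the node itself starts. An illustrative reconstruction of how that `docker run` invocation is assembled (the tarball path and image tag below are shortened examples, not the real values):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/home/user/.minikube/cache/preload.tar.lz4" // example path
        volume := "addons-673472"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45" // tag shortened

        // Mount the tarball read-only, mount the volume, and let tar do the work.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        fmt.Println(cmd.String())
        // cmd.Run() would perform the extraction; printed here for illustration.
    }
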
	W1028 10:59:42.953810  542642 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1028 10:59:42.953932  542642 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1028 10:59:43.007276  542642 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-673472 --name addons-673472 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673472 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-673472 --network addons-673472 --ip 192.168.49.2 --volume addons-673472:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e
	I1028 10:59:43.338724  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Running}}
	I1028 10:59:43.356046  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 10:59:43.375169  542642 cli_runner.go:164] Run: docker exec addons-673472 stat /var/lib/dpkg/alternatives/iptables
	I1028 10:59:43.421376  542642 oci.go:144] the created container "addons-673472" has a running status.
	I1028 10:59:43.421439  542642 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa...
	I1028 10:59:43.483588  542642 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1028 10:59:43.506488  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 10:59:43.524511  542642 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1028 10:59:43.524538  542642 kic_runner.go:114] Args: [docker exec --privileged addons-673472 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1028 10:59:43.569953  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 10:59:43.588196  542642 machine.go:93] provisionDockerMachine start ...
	I1028 10:59:43.588337  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:43.607212  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:43.607468  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:43.607480  542642 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 10:59:43.608322  542642 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42546->127.0.0.1:32768: read: connection reset by peer
	I1028 10:59:46.732613  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-673472
	
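
The first dial above fails with `connection reset by peer` because sshd inside the fresh container is still starting; the provisioner simply retries until the handshake succeeds a few seconds later. A small sketch of that retry loop using golang.org/x/crypto/ssh (auth setup elided; the port and user mirror the log):

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd accepts the handshake or the
    // attempt budget runs out.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var err error
        for i := 0; i < attempts; i++ {
            var c *ssh.Client
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(time.Second) // container sshd may still be coming up
        }
        return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User: "docker",
            // Auth would hold ssh.PublicKeys(...) built from the machine's
            // id_rsa; omitted to keep the sketch short.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic node
        }
        if _, err := dialWithRetry("127.0.0.1:32768", cfg, 10); err != nil {
            fmt.Println(err)
        }
    }
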
	I1028 10:59:46.732653  542642 ubuntu.go:169] provisioning hostname "addons-673472"
	I1028 10:59:46.732722  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:46.749834  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:46.750061  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:46.750078  542642 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-673472 && echo "addons-673472" | sudo tee /etc/hostname
	I1028 10:59:46.880475  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-673472
	
	I1028 10:59:46.880595  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:46.897706  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:46.897934  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:46.897960  542642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-673472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-673472/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-673472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 10:59:47.017226  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 10:59:47.017263  542642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19876-533928/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-533928/.minikube}
	I1028 10:59:47.017304  542642 ubuntu.go:177] setting up certificates
	I1028 10:59:47.017323  542642 provision.go:84] configureAuth start
	I1028 10:59:47.017383  542642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673472
	I1028 10:59:47.034526  542642 provision.go:143] copyHostCerts
	I1028 10:59:47.034628  542642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-533928/.minikube/ca.pem (1078 bytes)
	I1028 10:59:47.034799  542642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-533928/.minikube/cert.pem (1123 bytes)
	I1028 10:59:47.034871  542642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-533928/.minikube/key.pem (1675 bytes)
	I1028 10:59:47.034926  542642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-533928/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca-key.pem org=jenkins.addons-673472 san=[127.0.0.1 192.168.49.2 addons-673472 localhost minikube]
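
The server certificate generated above is signed by the minikube CA and carries both IP and DNS SANs (127.0.0.1, 192.168.49.2, addons-673472, localhost, minikube). A compact sketch of issuing such a certificate with crypto/x509, using a throwaway in-memory CA in place of the real ca.pem/ca-key.pem pair:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Stand-in CA; minikube loads its CA from disk instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * 365 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-673472"}},
            DNSNames:     []string{"addons-673472", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * 365 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued server cert, %d DER bytes\n", len(der))
    }
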
	I1028 10:59:47.320208  542642 provision.go:177] copyRemoteCerts
	I1028 10:59:47.320278  542642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 10:59:47.320318  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.337824  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.425816  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 10:59:47.449530  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 10:59:47.471356  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 10:59:47.492773  542642 provision.go:87] duration metric: took 475.429444ms to configureAuth
	I1028 10:59:47.492806  542642 ubuntu.go:193] setting minikube options for container-runtime
	I1028 10:59:47.492993  542642 config.go:182] Loaded profile config "addons-673472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:59:47.493127  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.509958  542642 main.go:141] libmachine: Using SSH client type: native
	I1028 10:59:47.510147  542642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1028 10:59:47.510164  542642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 10:59:47.723865  542642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 10:59:47.723905  542642 machine.go:96] duration metric: took 4.135668383s to provisionDockerMachine
	I1028 10:59:47.723923  542642 client.go:171] duration metric: took 14.70493314s to LocalClient.Create
	I1028 10:59:47.723952  542642 start.go:167] duration metric: took 14.705016732s to libmachine.API.Create "addons-673472"
	I1028 10:59:47.723965  542642 start.go:293] postStartSetup for "addons-673472" (driver="docker")
	I1028 10:59:47.723981  542642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 10:59:47.724056  542642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 10:59:47.724109  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.740986  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.830267  542642 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 10:59:47.833684  542642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 10:59:47.833719  542642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 10:59:47.833727  542642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 10:59:47.833733  542642 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 10:59:47.833747  542642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-533928/.minikube/addons for local assets ...
	I1028 10:59:47.833821  542642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-533928/.minikube/files for local assets ...
	I1028 10:59:47.833859  542642 start.go:296] duration metric: took 109.886039ms for postStartSetup
	I1028 10:59:47.834224  542642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673472
	I1028 10:59:47.852268  542642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/config.json ...
	I1028 10:59:47.852581  542642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 10:59:47.852646  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.870677  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.957879  542642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 10:59:47.962634  542642 start.go:128] duration metric: took 14.94690152s to createHost
	I1028 10:59:47.962666  542642 start.go:83] releasing machines lock for "addons-673472", held for 14.947124395s
	I1028 10:59:47.962793  542642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673472
	I1028 10:59:47.980075  542642 ssh_runner.go:195] Run: cat /version.json
	I1028 10:59:47.980150  542642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 10:59:47.980166  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.980227  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 10:59:47.998345  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:47.998613  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 10:59:48.080597  542642 ssh_runner.go:195] Run: systemctl --version
	I1028 10:59:48.151991  542642 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 10:59:48.291683  542642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 10:59:48.296533  542642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 10:59:48.315192  542642 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1028 10:59:48.315278  542642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 10:59:48.343139  542642 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
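
Existing bridge/podman CNI configs are disabled rather than deleted: they are renamed with a `.mk_disabled` suffix so cri-o stops loading them while the originals stay recoverable. An illustrative Go equivalent of that find-and-rename (the directory and patterns mirror the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue // already disabled, or not a config file
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                    continue
                }
                fmt.Println("disabled", src)
            }
        }
    }
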
	I1028 10:59:48.343171  542642 start.go:495] detecting cgroup driver to use...
	I1028 10:59:48.343208  542642 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 10:59:48.343261  542642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 10:59:48.357854  542642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 10:59:48.368073  542642 docker.go:217] disabling cri-docker service (if available) ...
	I1028 10:59:48.368138  542642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 10:59:48.379997  542642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 10:59:48.393039  542642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 10:59:48.469904  542642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 10:59:48.550254  542642 docker.go:233] disabling docker service ...
	I1028 10:59:48.550324  542642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 10:59:48.569270  542642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 10:59:48.580866  542642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 10:59:48.661689  542642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 10:59:48.749585  542642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 10:59:48.760408  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 10:59:48.775824  542642 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 10:59:48.775879  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.785170  542642 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 10:59:48.785244  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.794496  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.803650  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.813248  542642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 10:59:48.822440  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.832021  542642 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:59:48.846811  542642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
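
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs manager, conmon cgroup, and the unprivileged-port sysctl. A sketch of the first two edits expressed as a Go rewrite (the regexes mirror the sed patterns; error handling kept minimal):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        // Point cri-o at the pause image used by this Kubernetes version.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Match the host's cgroupfs driver, as detected earlier in the log.
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Println(err)
        }
    }
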
	I1028 10:59:48.856333  542642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 10:59:48.864921  542642 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 10:59:48.864980  542642 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 10:59:48.879815  542642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
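
The fallback above is: if the bridge-nf sysctl key is absent, load br_netfilter, then enable IPv4 forwarding directly through /proc. A minimal sketch of the same sequence (requires root, as the sudo invocations in the log imply):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // Sysctl key absent: the br_netfilter module is not loaded yet.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Println("modprobe:", err, string(out))
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Println("ip_forward:", err)
        }
    }
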
	I1028 10:59:48.888314  542642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:59:48.963756  542642 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 10:59:49.067640  542642 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 10:59:49.067723  542642 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 10:59:49.071463  542642 start.go:563] Will wait 60s for crictl version
	I1028 10:59:49.071526  542642 ssh_runner.go:195] Run: which crictl
	I1028 10:59:49.075058  542642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 10:59:49.108980  542642 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1028 10:59:49.109088  542642 ssh_runner.go:195] Run: crio --version
	I1028 10:59:49.147212  542642 ssh_runner.go:195] Run: crio --version
	I1028 10:59:49.184361  542642 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1028 10:59:49.186086  542642 cli_runner.go:164] Run: docker network inspect addons-673472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 10:59:49.202472  542642 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1028 10:59:49.206394  542642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 10:59:49.217166  542642 kubeadm.go:883] updating cluster {Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 10:59:49.217311  542642 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:49.217364  542642 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:59:49.285638  542642 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 10:59:49.285663  542642 crio.go:433] Images already preloaded, skipping extraction
	I1028 10:59:49.285714  542642 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:59:49.320653  542642 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 10:59:49.320679  542642 cache_images.go:84] Images are preloaded, skipping loading
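
Preload verification runs `sudo crictl images --output json` and confirms the expected images are already on the node, which lets the loader skip extraction. A small sketch of listing and counting them (the struct fields follow crictl's JSON output):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList models just enough of crictl's JSON to count images.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d images present\n", len(list.Images))
    }
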
	I1028 10:59:49.320687  542642 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1028 10:59:49.320815  542642 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-673472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 10:59:49.320881  542642 ssh_runner.go:195] Run: crio config
	I1028 10:59:49.366384  542642 cni.go:84] Creating CNI manager for ""
	I1028 10:59:49.366406  542642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:49.366418  542642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 10:59:49.366441  542642 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-673472 NodeName:addons-673472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 10:59:49.366567  542642 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-673472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
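
The four documents above are staged as /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below). As a minimal sketch, the same file could be sanity-checked offline with kubeadm's own validator, assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.31.2 (the `ls` check below confirms the binaries are present, and `kubeadm config validate` exists from v1.26 on):

	out/minikube-linux-amd64 -p addons-673472 ssh \
	  "sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml"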
	
	I1028 10:59:49.366629  542642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 10:59:49.375496  542642 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 10:59:49.375568  542642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 10:59:49.384261  542642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1028 10:59:49.401131  542642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 10:59:49.418088  542642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1028 10:59:49.434953  542642 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1028 10:59:49.438558  542642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 10:59:49.449288  542642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:59:49.524974  542642 ssh_runner.go:195] Run: sudo systemctl start kubelet
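
The kubelet.service unit and the 10-kubeadm.conf drop-in written just above can be inspected on the node with systemd's own tooling; a quick check would be:

	out/minikube-linux-amd64 -p addons-673472 ssh "systemctl cat kubelet"
	out/minikube-linux-amd64 -p addons-673472 ssh "systemctl is-active kubelet"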
	I1028 10:59:49.538071  542642 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472 for IP: 192.168.49.2
	I1028 10:59:49.538097  542642 certs.go:194] generating shared ca certs ...
	I1028 10:59:49.538115  542642 certs.go:226] acquiring lock for ca certs: {Name:mk4f171b5fc82d02323944775bf27bfd4cb01f5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.538236  542642 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key
	I1028 10:59:49.639824  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt ...
	I1028 10:59:49.639868  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt: {Name:mkd44132e8612cfbcdb9b8d86b1fe1f676ffdeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.640072  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key ...
	I1028 10:59:49.640085  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key: {Name:mkf14aff199e8845f01b8ea4c55bad99ed133239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.640162  542642 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key
	I1028 10:59:49.852851  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.crt ...
	I1028 10:59:49.852887  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.crt: {Name:mkb535181adba9fa3c17366069da7c4c211ab9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.853064  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key ...
	I1028 10:59:49.853076  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key: {Name:mkbdd5c2c2158f7023fd6059f943bbe4bae61b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.853951  542642 certs.go:256] generating profile certs ...
	I1028 10:59:49.854046  542642 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.key
	I1028 10:59:49.854064  542642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt with IP's: []
	I1028 10:59:49.966208  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt ...
	I1028 10:59:49.966247  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: {Name:mk1a91296c0a0584dfd795afde0cd6124b219b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.966457  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.key ...
	I1028 10:59:49.966473  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.key: {Name:mke76e2304da30762701329588da4e12fcf058eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:49.966569  542642 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56
	I1028 10:59:49.966595  542642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1028 10:59:50.066384  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56 ...
	I1028 10:59:50.066419  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56: {Name:mk26f4eae40046e2f9760be0736db8a4cf2aed4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:50.066619  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56 ...
	I1028 10:59:50.066643  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56: {Name:mke6dba6ec7cc173445d41f08f411d73cd4c6923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:50.066750  542642 certs.go:381] copying /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt.d3d5ad56 -> /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt
	I1028 10:59:50.066847  542642 certs.go:385] copying /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key.d3d5ad56 -> /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key
	I1028 10:59:50.066911  542642 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key
	I1028 10:59:50.066937  542642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt with IP's: []
	I1028 10:59:50.225084  542642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt ...
	I1028 10:59:50.225117  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt: {Name:mk8517bcb80f98969c02fde259782834ba3d7d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:59:50.225299  542642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key ...
	I1028 10:59:50.225316  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key: {Name:mk1574fd3c7aa35b7c3a8015ad57972a01c86130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
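
For reference, the crypto.go steps above correspond in spirit to the following openssl sketch: a self-signed CA, then a leaf cert signed by it. File names here are illustrative, not minikube's; the SANs are the IPs logged for the apiserver cert ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]):

	# Illustrative only: self-signed CA, analogous to "minikubeCA"
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout ca.key -out ca.crt -subj "/CN=minikubeCA"
	# Leaf key + CSR, analogous to the "minikube" profile cert
	openssl req -newkey rsa:2048 -nodes -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
	# Sign with the CA, attaching the SANs from the log above
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2')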
	I1028 10:59:50.225597  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 10:59:50.225651  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/ca.pem (1078 bytes)
	I1028 10:59:50.225688  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/cert.pem (1123 bytes)
	I1028 10:59:50.225724  542642 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-533928/.minikube/certs/key.pem (1675 bytes)
	I1028 10:59:50.226429  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 10:59:50.250065  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 10:59:50.272840  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 10:59:50.297142  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 10:59:50.319157  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 10:59:50.342951  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 10:59:50.367065  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 10:59:50.389802  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 10:59:50.412522  542642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 10:59:50.435751  542642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 10:59:50.452937  542642 ssh_runner.go:195] Run: openssl version
	I1028 10:59:50.458336  542642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 10:59:50.467859  542642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:59:50.471462  542642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:59 /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:59:50.471530  542642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:59:50.478466  542642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
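
The hash/symlink pair above follows OpenSSL's trust-store (c_rehash) convention: the link name is the certificate's subject hash plus a ".0" suffix, b5213941.0 here. Done by hand on the node it would look like:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"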
	I1028 10:59:50.487526  542642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 10:59:50.490685  542642 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 10:59:50.490735  542642 kubeadm.go:392] StartCluster: {Name:addons-673472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-673472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:50.490823  542642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 10:59:50.490883  542642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 10:59:50.525092  542642 cri.go:89] found id: ""
	I1028 10:59:50.525168  542642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 10:59:50.533742  542642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 10:59:50.542479  542642 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1028 10:59:50.542541  542642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 10:59:50.551003  542642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 10:59:50.551027  542642 kubeadm.go:157] found existing configuration files:
	
	I1028 10:59:50.551080  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 10:59:50.558870  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 10:59:50.558941  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 10:59:50.566564  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 10:59:50.574624  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 10:59:50.574686  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 10:59:50.582723  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 10:59:50.591122  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 10:59:50.591178  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 10:59:50.599560  542642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 10:59:50.608182  542642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 10:59:50.608236  542642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 10:59:50.616624  542642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1028 10:59:50.671384  542642 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-gcp\n", err: exit status 1
	I1028 10:59:50.723737  542642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 10:59:59.460393  542642 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 10:59:59.460471  542642 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 10:59:59.460584  542642 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1028 10:59:59.460664  542642 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-gcp
	I1028 10:59:59.460700  542642 kubeadm.go:310] OS: Linux
	I1028 10:59:59.460753  542642 kubeadm.go:310] CGROUPS_CPU: enabled
	I1028 10:59:59.460837  542642 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1028 10:59:59.460886  542642 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1028 10:59:59.460922  542642 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1028 10:59:59.460965  542642 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1028 10:59:59.461030  542642 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1028 10:59:59.461095  542642 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1028 10:59:59.461153  542642 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1028 10:59:59.461194  542642 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1028 10:59:59.461267  542642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 10:59:59.461412  542642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 10:59:59.461527  542642 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 10:59:59.461595  542642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 10:59:59.464242  542642 out.go:235]   - Generating certificates and keys ...
	I1028 10:59:59.464338  542642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 10:59:59.464395  542642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 10:59:59.464494  542642 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 10:59:59.464578  542642 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 10:59:59.464659  542642 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 10:59:59.464773  542642 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 10:59:59.464870  542642 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 10:59:59.464989  542642 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-673472 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 10:59:59.465070  542642 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 10:59:59.465204  542642 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-673472 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1028 10:59:59.465311  542642 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 10:59:59.465425  542642 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 10:59:59.465496  542642 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 10:59:59.465556  542642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 10:59:59.465619  542642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 10:59:59.465683  542642 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 10:59:59.465762  542642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 10:59:59.465855  542642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 10:59:59.465939  542642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 10:59:59.466049  542642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 10:59:59.466148  542642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 10:59:59.467513  542642 out.go:235]   - Booting up control plane ...
	I1028 10:59:59.467597  542642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 10:59:59.467678  542642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 10:59:59.467747  542642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 10:59:59.467853  542642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 10:59:59.467939  542642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 10:59:59.467976  542642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 10:59:59.468090  542642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 10:59:59.468181  542642 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 10:59:59.468233  542642 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.60969ms
	I1028 10:59:59.468327  542642 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 10:59:59.468422  542642 kubeadm.go:310] [api-check] The API server is healthy after 4.502341846s
	I1028 10:59:59.468553  542642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 10:59:59.468704  542642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 10:59:59.468785  542642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 10:59:59.468971  542642 kubeadm.go:310] [mark-control-plane] Marking the node addons-673472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 10:59:59.469113  542642 kubeadm.go:310] [bootstrap-token] Using token: s6hekf.p6us0uvpwrt54ii9
	I1028 10:59:59.470775  542642 out.go:235]   - Configuring RBAC rules ...
	I1028 10:59:59.470918  542642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 10:59:59.471027  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 10:59:59.471211  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 10:59:59.471402  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 10:59:59.471522  542642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 10:59:59.471626  542642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 10:59:59.471854  542642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 10:59:59.471911  542642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 10:59:59.471957  542642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 10:59:59.471964  542642 kubeadm.go:310] 
	I1028 10:59:59.472023  542642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 10:59:59.472037  542642 kubeadm.go:310] 
	I1028 10:59:59.472239  542642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 10:59:59.472264  542642 kubeadm.go:310] 
	I1028 10:59:59.472380  542642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 10:59:59.472508  542642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 10:59:59.472597  542642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 10:59:59.472609  542642 kubeadm.go:310] 
	I1028 10:59:59.472702  542642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 10:59:59.472721  542642 kubeadm.go:310] 
	I1028 10:59:59.472812  542642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 10:59:59.472822  542642 kubeadm.go:310] 
	I1028 10:59:59.472901  542642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 10:59:59.473004  542642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 10:59:59.473103  542642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 10:59:59.473116  542642 kubeadm.go:310] 
	I1028 10:59:59.473248  542642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 10:59:59.473323  542642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 10:59:59.473329  542642 kubeadm.go:310] 
	I1028 10:59:59.473392  542642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s6hekf.p6us0uvpwrt54ii9 \
	I1028 10:59:59.473472  542642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:22f0f7d5663ef838083a14a9e686edb004104fc5a60ae6df0f45c5a76351185e \
	I1028 10:59:59.473491  542642 kubeadm.go:310] 	--control-plane 
	I1028 10:59:59.473497  542642 kubeadm.go:310] 
	I1028 10:59:59.473572  542642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 10:59:59.473586  542642 kubeadm.go:310] 
	I1028 10:59:59.473686  542642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s6hekf.p6us0uvpwrt54ii9 \
	I1028 10:59:59.473848  542642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:22f0f7d5663ef838083a14a9e686edb004104fc5a60ae6df0f45c5a76351185e 
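
At this point the control plane is initialized; the state kubeadm reported can be re-checked by hand using the admin kubeconfig it wrote, following its own printed instructions, e.g.:

	out/minikube-linux-amd64 -p addons-673472 ssh \
	  "sudo env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.31.2/kubectl get nodes -o wide"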
	I1028 10:59:59.473867  542642 cni.go:84] Creating CNI manager for ""
	I1028 10:59:59.473879  542642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:59.475888  542642 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 10:59:59.477334  542642 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 10:59:59.482009  542642 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 10:59:59.482040  542642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 10:59:59.500964  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
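
kindnet is deployed from /var/tmp/minikube/cni.yaml as shown above. A quick rollout check would be the following; this assumes the manifest labels its pods app=kindnet, which is minikube's convention but is not confirmed by this log:

	out/minikube-linux-amd64 -p addons-673472 ssh \
	  "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet"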
	I1028 10:59:59.709902  542642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 10:59:59.709975  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:59:59.709977  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-673472 minikube.k8s.io/updated_at=2024_10_28T10_59_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=addons-673472 minikube.k8s.io/primary=true
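
The label set applied above (minikube.k8s.io/updated_at, version, commit, name, primary) can be read back with --show-labels:

	out/minikube-linux-amd64 -p addons-673472 ssh \
	  "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-673472 --show-labels"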
	I1028 10:59:59.827958  542642 ops.go:34] apiserver oom_adj: -16
	I1028 10:59:59.827991  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:00.328362  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:00.828250  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:01.328889  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:01.828850  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:02.328156  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:02.828817  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:03.328466  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:03.828414  542642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:00:03.927985  542642 kubeadm.go:1113] duration metric: took 4.218075435s to wait for elevateKubeSystemPrivileges
	I1028 11:00:03.928025  542642 kubeadm.go:394] duration metric: took 13.43729589s to StartCluster
	I1028 11:00:03.928051  542642 settings.go:142] acquiring lock: {Name:mk4b7cc0753ef8271ffd0ab99530eca53ed30f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:00:03.928255  542642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 11:00:03.928771  542642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-533928/kubeconfig: {Name:mk7ef4f3d61e5f33766771edfad48c83b564ef6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:00:03.928998  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:00:03.929046  542642 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:00:03.929156  542642 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
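
The toEnable map above is what minikube computed from addon defaults and start flags; the same switches are exposed per profile from the host, e.g.:

	out/minikube-linux-amd64 -p addons-673472 addons list
	out/minikube-linux-amd64 -p addons-673472 addons enable metrics-server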
	I1028 11:00:03.929291  542642 addons.go:69] Setting yakd=true in profile "addons-673472"
	I1028 11:00:03.929311  542642 config.go:182] Loaded profile config "addons-673472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:00:03.929304  542642 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-673472"
	I1028 11:00:03.929322  542642 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-673472"
	I1028 11:00:03.929335  542642 addons.go:69] Setting volcano=true in profile "addons-673472"
	I1028 11:00:03.929346  542642 addons.go:234] Setting addon volcano=true in "addons-673472"
	I1028 11:00:03.929332  542642 addons.go:69] Setting storage-provisioner=true in profile "addons-673472"
	I1028 11:00:03.929361  542642 addons.go:69] Setting ingress-dns=true in profile "addons-673472"
	I1028 11:00:03.929371  542642 addons.go:234] Setting addon storage-provisioner=true in "addons-673472"
	I1028 11:00:03.929373  542642 addons.go:69] Setting gcp-auth=true in profile "addons-673472"
	I1028 11:00:03.929364  542642 addons.go:69] Setting default-storageclass=true in profile "addons-673472"
	I1028 11:00:03.929384  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929386  542642 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-673472"
	I1028 11:00:03.929393  542642 mustload.go:65] Loading cluster: addons-673472
	I1028 11:00:03.929375  542642 addons.go:234] Setting addon ingress-dns=true in "addons-673472"
	I1028 11:00:03.929399  542642 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-673472"
	I1028 11:00:03.929411  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929425  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929417  542642 addons.go:69] Setting inspektor-gadget=true in profile "addons-673472"
	I1028 11:00:03.929431  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929440  542642 addons.go:234] Setting addon inspektor-gadget=true in "addons-673472"
	I1028 11:00:03.929468  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929526  542642 config.go:182] Loaded profile config "addons-673472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:00:03.929764  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929839  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929908  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929934  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929935  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929948  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.930127  542642 addons.go:69] Setting cloud-spanner=true in profile "addons-673472"
	I1028 11:00:03.930152  542642 addons.go:234] Setting addon cloud-spanner=true in "addons-673472"
	I1028 11:00:03.930186  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.930184  542642 addons.go:69] Setting metrics-server=true in profile "addons-673472"
	I1028 11:00:03.930256  542642 addons.go:234] Setting addon metrics-server=true in "addons-673472"
	I1028 11:00:03.930332  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.930641  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.930941  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.929350  542642 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-673472"
	I1028 11:00:03.931298  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929392  542642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-673472"
	I1028 11:00:03.932095  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.932408  542642 addons.go:69] Setting volumesnapshots=true in profile "addons-673472"
	I1028 11:00:03.932473  542642 addons.go:234] Setting addon volumesnapshots=true in "addons-673472"
	I1028 11:00:03.929316  542642 addons.go:234] Setting addon yakd=true in "addons-673472"
	I1028 11:00:03.932523  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.932584  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.933046  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.933213  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.933240  542642 out.go:177] * Verifying Kubernetes components...
	I1028 11:00:03.929324  542642 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-673472"
	I1028 11:00:03.933348  542642 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-673472"
	I1028 11:00:03.933599  542642 addons.go:69] Setting registry=true in profile "addons-673472"
	I1028 11:00:03.933629  542642 addons.go:234] Setting addon registry=true in "addons-673472"
	I1028 11:00:03.933665  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929354  542642 addons.go:69] Setting ingress=true in profile "addons-673472"
	I1028 11:00:03.935533  542642 addons.go:234] Setting addon ingress=true in "addons-673472"
	I1028 11:00:03.935595  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.929374  542642 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-673472"
	I1028 11:00:03.935939  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:03.932486  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.942401  542642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:00:03.953361  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.953371  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.953453  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.953459  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:03.962021  542642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:00:03.965883  542642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:00:03.965914  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:00:03.965990  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.972247  542642 host.go:66] Checking if "addons-673472" exists ...
	W1028 11:00:03.974727  542642 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 11:00:03.977323  542642 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 11:00:03.978481  542642 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 11:00:03.978551  542642 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 11:00:03.978889  542642 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 11:00:03.978914  542642 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 11:00:03.978976  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.983666  542642 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:00:03.983960  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 11:00:03.984046  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.983728  542642 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:00:03.984832  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 11:00:03.984896  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.986121  542642 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 11:00:03.989150  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 11:00:03.989174  542642 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 11:00:03.989236  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:03.996927  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 11:00:03.998673  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 11:00:03.998743  542642 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 11:00:03.998817  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.025876  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.026038  542642 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 11:00:04.031799  542642 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 11:00:04.031955  542642 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 11:00:04.032005  542642 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 11:00:04.033861  542642 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 11:00:04.033890  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 11:00:04.033954  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.034344  542642 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:00:04.034361  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 11:00:04.034411  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.034764  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 11:00:04.034779  542642 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 11:00:04.034838  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.044968  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.045750  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.052954  542642 addons.go:234] Setting addon default-storageclass=true in "addons-673472"
	I1028 11:00:04.053016  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:04.053456  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:04.056887  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.062397  542642 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-673472"
	I1028 11:00:04.062452  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:04.062942  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:04.064878  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.067610  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 11:00:04.068247  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.068985  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 11:00:04.070493  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:00:04.070580  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 11:00:04.073569  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:00:04.073655  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 11:00:04.076838  542642 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:00:04.076878  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 11:00:04.076993  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.077626  542642 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 11:00:04.079171  542642 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 11:00:04.079188  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 11:00:04.079240  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.079406  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 11:00:04.080887  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 11:00:04.082292  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 11:00:04.082838  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.084942  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 11:00:04.086241  542642 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 11:00:04.087505  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 11:00:04.087526  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 11:00:04.087604  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.093486  542642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:00:04.093511  542642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:00:04.093574  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.095476  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.096793  542642 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 11:00:04.098375  542642 out.go:177]   - Using image docker.io/busybox:stable
	I1028 11:00:04.099672  542642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:00:04.099696  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 11:00:04.099761  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:04.100944  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.114810  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.118312  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.120239  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.139369  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:04.144237  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	W1028 11:00:04.207955  542642 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1028 11:00:04.207999  542642 retry.go:31] will retry after 271.063266ms: ssh: handshake failed: EOF
	I1028 11:00:04.227147  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
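
The sed pipeline above rewrites the CoreDNS Corefile in place: it injects a hosts block so host.minikube.internal resolves to the gateway 192.168.49.1, and inserts the log plugin ahead of errors. Once the replace completes (it does, 1.79s later below), the result can be confirmed with:

	out/minikube-linux-amd64 -p addons-673472 ssh \
	  "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'" | grep -A3 'hosts {'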
	I1028 11:00:04.321827  542642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:00:04.333640  542642 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:00:04.333672  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 11:00:04.508853  542642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 11:00:04.508937  542642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 11:00:04.512691  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:00:04.519934  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:00:04.520569  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 11:00:04.520594  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 11:00:04.605692  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 11:00:04.618926  542642 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 11:00:04.618960  542642 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 11:00:04.627789  542642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 11:00:04.627820  542642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 11:00:04.706319  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 11:00:04.706415  542642 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 11:00:04.710259  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:00:04.718156  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:00:04.726716  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:00:04.806618  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 11:00:04.806721  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 11:00:04.810411  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:00:04.812556  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:00:04.820399  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 11:00:04.820428  542642 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 11:00:04.907872  542642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 11:00:04.907905  542642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 11:00:05.017060  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 11:00:05.017147  542642 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 11:00:05.022649  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 11:00:05.022734  542642 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 11:00:05.119250  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 11:00:05.119334  542642 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 11:00:05.126092  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:00:05.207687  542642 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:00:05.207733  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 11:00:05.410039  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 11:00:05.410086  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 11:00:05.425916  542642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:00:05.425953  542642 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 11:00:05.521360  542642 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:00:05.521390  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 11:00:05.610953  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:00:05.828153  542642 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:00:05.828181  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 11:00:06.013272  542642 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.786075969s)
	I1028 11:00:06.013494  542642 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.691512759s)
	I1028 11:00:06.013525  542642 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1028 11:00:06.014835  542642 node_ready.go:35] waiting up to 6m0s for node "addons-673472" to be "Ready" ...
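
node_ready.go polls the node object until its Ready condition reports True. Done by hand, the equivalent one-liner would be (a sketch; minikube itself goes through client-go rather than shelling out):

	kubectl --context addons-673472 wait --for=condition=Ready \
	  node/addons-673472 --timeout=6m0s
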
	I1028 11:00:06.025963  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 11:00:06.026008  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 11:00:06.113675  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:00:06.128466  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:00:06.410201  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:00:06.715954  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 11:00:06.715991  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 11:00:06.811583  542642 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-673472" context rescaled to 1 replicas
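
Rescaling coredns trims the stock two-replica Deployment down to one for this single-node cluster. The manual equivalent (a sketch):

	kubectl --context addons-673472 -n kube-system scale deployment coredns --replicas=1
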
	I1028 11:00:07.226692  542642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 11:00:07.226725  542642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 11:00:07.706702  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 11:00:07.706749  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 11:00:07.919529  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 11:00:07.919659  542642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 11:00:08.016143  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.503416177s)
	I1028 11:00:08.024375  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 11:00:08.024461  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 11:00:08.029041  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:08.218644  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 11:00:08.218672  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 11:00:08.330261  542642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:00:08.330292  542642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 11:00:08.606760  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:00:08.906361  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.386307705s)
	I1028 11:00:08.906662  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.196323131s)
	I1028 11:00:08.906727  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.300876329s)
	I1028 11:00:10.023181  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.304981234s)
	I1028 11:00:10.023228  542642 addons.go:475] Verifying addon ingress=true in "addons-673472"
	I1028 11:00:10.023230  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.296476835s)
	I1028 11:00:10.023323  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.212822964s)
	I1028 11:00:10.023354  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.210771619s)
	I1028 11:00:10.023615  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.897444273s)
	I1028 11:00:10.023776  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.412725651s)
	I1028 11:00:10.023797  542642 addons.go:475] Verifying addon registry=true in "addons-673472"
	I1028 11:00:10.024106  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.910390692s)
	I1028 11:00:10.024166  542642 addons.go:475] Verifying addon metrics-server=true in "addons-673472"
	I1028 11:00:10.024204  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.895689846s)
	I1028 11:00:10.025078  542642 out.go:177] * Verifying registry addon...
	I1028 11:00:10.025112  542642 out.go:177] * Verifying ingress addon...
	I1028 11:00:10.026171  542642 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-673472 service yakd-dashboard -n yakd-dashboard
	
	I1028 11:00:10.028979  542642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 11:00:10.029010  542642 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 11:00:10.037917  542642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 11:00:10.037939  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:10.038361  542642 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 11:00:10.038383  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
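
kapi.go:96 polls every pod behind the label selector until it is Running and Ready; the three matches here are the ingress-nginx controller plus its one-shot admission Jobs, which finish as Succeeded rather than Ready, so the helper also accepts completed pods. A hand-rolled check scoped to just the controller (a sketch; app.kubernetes.io/component=controller is the stock ingress-nginx controller label, not taken from this log):

	kubectl --context addons-673472 -n ingress-nginx wait --for=condition=Ready pod \
	  --selector=app.kubernetes.io/component=controller --timeout=300s
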
	W1028 11:00:10.040104  542642 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
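
The "object has been modified" warning is Kubernetes optimistic concurrency doing its job: the addon read the local-path StorageClass, something else updated it in the meantime, and the write carrying the stale resourceVersion was rejected. A re-read-then-write retry clears it; a patch sidesteps it entirely, since a patch sends no resourceVersion (a sketch of the non-default marking the error message describes):

	kubectl --context addons-673472 patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
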
	I1028 11:00:10.525217  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:10.611986  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:10.613144  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:10.748755  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.338472038s)
	W1028 11:00:10.748844  542642 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 11:00:10.748875  542642 retry.go:31] will retry after 133.009364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
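
This failure is a CRD/CR race: the VolumeSnapshotClass object is applied in the same batch that creates its CustomResourceDefinition, and API discovery has not yet registered the new kind, hence "resource mapping not found". The retry below succeeds once discovery catches up; the explicit fix is to apply the CRDs first, wait for them to be Established, then apply the custom resources (a sketch using the file paths from this run):

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
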
	I1028 11:00:10.882487  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:00:11.032534  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:11.033232  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:11.210423  542642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 11:00:11.210510  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:11.240863  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
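
The port in that ssh client comes from the docker inspect template two lines up: the minikube node container publishes its sshd on an ephemeral host port, 32768 in this run. Reproducing the connection by hand (a sketch; key path and docker user as logged above):

	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-673472)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa \
	  docker@127.0.0.1
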
	I1028 11:00:11.332257  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.725366649s)
	I1028 11:00:11.332305  542642 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-673472"
	I1028 11:00:11.333950  542642 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 11:00:11.336375  542642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 11:00:11.342096  542642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 11:00:11.342124  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:11.423475  542642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 11:00:11.440624  542642 addons.go:234] Setting addon gcp-auth=true in "addons-673472"
	I1028 11:00:11.440716  542642 host.go:66] Checking if "addons-673472" exists ...
	I1028 11:00:11.441129  542642 cli_runner.go:164] Run: docker container inspect addons-673472 --format={{.State.Status}}
	I1028 11:00:11.457258  542642 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 11:00:11.457325  542642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673472
	I1028 11:00:11.474228  542642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/addons-673472/id_rsa Username:docker}
	I1028 11:00:11.533017  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:11.533428  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:11.839750  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:12.031897  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:12.032437  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:12.340276  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:12.533142  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:12.533589  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:12.840428  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:13.018029  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:13.033083  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:13.033461  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:13.340072  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:13.532568  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:13.533225  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:13.709023  542642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.826484109s)
	I1028 11:00:13.709118  542642 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.251824215s)
	I1028 11:00:13.711424  542642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:00:13.712924  542642 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 11:00:13.714330  542642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 11:00:13.714371  542642 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 11:00:13.733018  542642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 11:00:13.733053  542642 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 11:00:13.752013  542642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:00:13.752036  542642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 11:00:13.769564  542642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:00:13.839914  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:14.032298  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:14.032517  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:14.127065  542642 addons.go:475] Verifying addon gcp-auth=true in "addons-673472"
	I1028 11:00:14.128722  542642 out.go:177] * Verifying gcp-auth addon...
	I1028 11:00:14.130951  542642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 11:00:14.133760  542642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 11:00:14.133782  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:14.340285  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:14.532982  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:14.533288  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:14.634613  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:14.840143  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:15.019148  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:15.032448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:15.032958  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:15.134336  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:15.340879  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:15.533080  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:15.533303  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:15.634764  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:15.839750  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:16.031926  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:16.032413  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:16.135004  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:16.340373  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:16.532536  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:16.532982  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:16.634735  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:16.839939  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:17.032448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:17.032749  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:17.134502  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:17.340641  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:17.518436  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:17.532411  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:17.532793  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:17.634354  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:17.840938  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:18.032048  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:18.032293  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:18.134554  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:18.339833  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:18.532357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:18.532762  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:18.634210  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:18.840690  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:19.032462  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:19.032641  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:19.134168  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:19.340685  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:19.532255  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:19.532666  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:19.634194  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:19.841622  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:20.018987  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:20.032348  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:20.032674  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:20.134019  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:20.340070  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:20.532422  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:20.532994  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:20.634357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:20.840227  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:21.032618  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:21.033102  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:21.134873  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:21.339907  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:21.532108  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:21.532579  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:21.635127  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:21.840405  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:22.032819  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:22.033326  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:22.135122  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:22.339809  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:22.518643  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:22.532568  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:22.533242  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:22.634588  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:22.841585  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:23.032631  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:23.033127  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:23.134959  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:23.339960  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:23.532979  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:23.533340  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:23.635104  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:23.840555  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:24.031958  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:24.032313  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:24.135208  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:24.341021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:24.518779  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:24.532954  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:24.533358  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:24.634814  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:24.840281  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:25.033068  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:25.033459  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:25.137076  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:25.340495  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:25.532613  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:25.532961  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:25.634334  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:25.840534  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:26.033051  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:26.033303  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:26.134656  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:26.340021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:26.519479  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:26.532948  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:26.533361  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:26.634677  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:26.840102  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:27.032835  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:27.033220  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:27.134588  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:27.340305  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:27.533159  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:27.533609  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:27.634829  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:27.840239  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:28.032417  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:28.033104  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:28.134251  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:28.340451  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:28.533263  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:28.533794  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:28.634529  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:28.840852  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:29.018852  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:29.032228  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:29.032824  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:29.134287  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:29.340648  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:29.533084  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:29.533408  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:29.635087  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:29.840447  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:30.032021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:30.032563  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:30.135111  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:30.340261  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:30.532981  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:30.533379  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:30.634986  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:30.841062  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:31.020622  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:31.032001  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:31.032707  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:31.135107  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:31.340063  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:31.532892  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:31.533331  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:31.634666  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:31.839664  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:32.032046  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:32.032349  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:32.134668  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:32.339703  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:32.532501  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:32.533004  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:32.634565  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:32.839946  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:33.032104  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:33.032533  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:33.133887  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:33.339750  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:33.521653  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:33.532019  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:33.532404  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:33.634455  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:33.840362  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:34.031751  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:34.032123  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:34.134704  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:34.339920  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:34.532578  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:34.532934  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:34.634561  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:34.839727  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:35.032216  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:35.032922  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:35.134202  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:35.340390  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:35.533051  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:35.533473  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:35.634951  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:35.840086  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:36.018743  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:36.032357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:36.032784  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:36.134355  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:36.340342  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:36.532992  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:36.533376  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:36.634655  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:36.840006  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:37.032478  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:37.033022  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:37.134411  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:37.340588  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:37.532455  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:37.532816  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:37.634265  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:37.840402  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:38.032036  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:38.032463  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:38.134650  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:38.339611  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:38.518268  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:38.532379  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:38.532934  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:38.634357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:38.840381  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:39.032763  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:39.033084  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:39.134336  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:39.340533  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:39.532865  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:39.533246  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:39.634678  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:39.839778  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:40.032044  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:40.032580  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:40.134883  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:40.339856  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:40.518324  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:40.532250  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:40.532693  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:40.634543  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:40.839924  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:41.031724  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:41.032109  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:41.134925  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:41.340234  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:41.532874  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:41.533484  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:41.634923  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:41.839972  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:42.032340  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:42.033008  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:42.134638  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:42.339653  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:42.518410  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:42.532308  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:42.532744  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:42.634174  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:42.840369  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:43.032777  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:43.033157  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:43.134517  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:43.340512  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:43.533090  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:43.533479  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:43.634890  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:43.840070  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:44.032473  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:44.033044  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:44.133729  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:44.339731  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:44.518656  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:44.532616  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:44.533293  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:44.634675  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:44.839609  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:45.031690  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:45.032408  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:45.134577  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:45.339862  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:45.533001  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:45.533328  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:45.634820  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:45.840147  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:46.032680  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:46.033230  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:46.134542  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:46.339620  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:46.532313  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:46.533065  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:46.634276  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:46.840571  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:47.018057  542642 node_ready.go:53] node "addons-673472" has status "Ready":"False"
	I1028 11:00:47.032700  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:47.033201  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:47.134712  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:47.339996  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:47.533566  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:47.535158  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:47.634705  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:47.840323  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:48.032891  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:48.033554  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:48.135125  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:48.341082  542642 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 11:00:48.341103  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:48.519270  542642 node_ready.go:49] node "addons-673472" has status "Ready":"True"
	I1028 11:00:48.519364  542642 node_ready.go:38] duration metric: took 42.504448997s for node "addons-673472" to be "Ready" ...
	I1028 11:00:48.519383  542642 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:00:48.532873  542642 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace to be "Ready" ...
	I1028 11:00:48.538600  542642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 11:00:48.538625  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:48.540010  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:48.634740  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:48.840061  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:49.037329  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:49.037959  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:49.137704  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:49.341158  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:49.533448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:49.533769  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:49.635369  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:49.842214  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:50.033503  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:50.033994  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:50.212249  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:50.409722  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:50.534110  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:50.535121  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:50.611537  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:00:50.635531  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:50.842955  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:51.034283  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:51.035568  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:51.135341  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:51.342518  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:51.532715  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:51.533127  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:51.634642  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:51.841910  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:52.033703  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:52.034134  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:52.135254  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:52.341350  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:52.533074  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:52.533473  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:52.634985  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:52.841191  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:53.032432  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:53.032929  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:53.038967  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:00:53.134977  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:53.342086  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:53.534117  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:53.534303  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:53.634596  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:53.842141  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:54.033150  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:54.033798  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:54.134916  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:54.340966  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:54.533868  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:54.534054  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:54.634837  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:54.841351  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:55.033162  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:55.033629  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:55.135251  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:55.343069  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:55.533683  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:55.534011  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:55.538521  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:00:55.634614  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:55.841853  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:56.033113  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:56.033323  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:56.134286  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:56.342244  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:56.534343  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:56.534563  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:56.635087  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:56.841851  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:57.033870  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:57.034195  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:57.135142  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:57.343291  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:57.533146  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:57.533355  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:57.539523  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:00:57.635013  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:57.843413  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:58.032675  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:58.033181  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:58.135325  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:58.341465  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:58.533138  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:58.533305  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:58.634383  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:58.841730  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:59.032925  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:59.033298  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:59.135767  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:59.340854  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:00:59.534406  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:00:59.534730  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:00:59.634079  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:00:59.841665  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:00.033109  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:00.033265  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:00.038431  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:00.135379  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:00.341771  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:00.533825  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:00.534007  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:00.635490  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:00.841832  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:01.033296  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:01.033699  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:01.134828  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:01.340920  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:01.535910  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:01.536068  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:01.634991  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:01.840999  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:02.033058  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:02.033403  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:02.134186  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:02.341390  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:02.533200  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:02.533519  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:02.538205  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:02.634199  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:02.841616  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:03.032687  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:03.033026  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:03.135053  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:03.341963  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:03.534040  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:03.534255  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:03.635173  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:03.840876  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:04.033154  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:04.033687  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:04.137514  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:04.341739  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:04.533516  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:04.533882  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:04.539188  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:04.635283  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:04.841442  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:05.032729  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:05.033124  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:05.136681  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:05.340583  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:05.533528  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:05.533935  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:05.634746  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:05.841153  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:06.033894  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:06.034118  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:06.135809  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:06.341845  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:06.532772  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:06.533071  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:06.634900  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:06.840948  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:07.033499  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:07.033971  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:07.038708  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:07.134489  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:07.341453  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:07.533460  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:07.534078  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:07.634752  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:07.841145  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:08.032456  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:08.032600  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:08.134493  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:08.341228  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:08.532960  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:08.533418  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:08.634946  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:08.841209  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:09.032428  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:09.032860  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:09.133910  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:09.340994  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:09.533027  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:09.533325  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:09.538036  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:09.635269  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:09.842042  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:10.033156  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:10.034120  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:10.135121  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:10.409633  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:10.612232  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:10.613683  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:10.708956  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:10.911211  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:11.109514  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:11.111442  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:11.217964  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:11.342875  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:11.534357  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:11.536302  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:11.539401  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:11.634349  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:11.841983  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:12.034345  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:12.035360  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:12.135368  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:12.342090  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:12.534830  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:12.535385  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:12.635476  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:12.841981  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:13.033585  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:13.034659  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:13.135255  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:13.341464  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:13.534090  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:13.534423  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:13.634870  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:13.840481  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:14.033127  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:14.033709  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:14.038134  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:14.134521  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:14.341478  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:14.533571  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:14.533775  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:14.634858  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:14.840881  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:15.033065  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:15.033576  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:15.135105  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:15.341248  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:15.533403  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:15.533889  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:15.634506  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:15.841862  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:16.033448  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:16.033889  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:16.038574  542642 pod_ready.go:103] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:16.134315  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:16.341749  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:16.533707  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:16.534255  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:16.635247  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:16.841160  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:17.032355  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:17.032720  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:17.135060  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:17.341269  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:17.533498  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:17.533806  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:17.538084  542642 pod_ready.go:93] pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.538109  542642 pod_ready.go:82] duration metric: took 29.005201782s for pod "amd-gpu-device-plugin-rbj2l" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.538121  542642 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-67wn8" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.542973  542642 pod_ready.go:93] pod "coredns-7c65d6cfc9-67wn8" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.542994  542642 pod_ready.go:82] duration metric: took 4.866917ms for pod "coredns-7c65d6cfc9-67wn8" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.543018  542642 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.547786  542642 pod_ready.go:93] pod "etcd-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.547816  542642 pod_ready.go:82] duration metric: took 4.791299ms for pod "etcd-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.547829  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.552584  542642 pod_ready.go:93] pod "kube-apiserver-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.552607  542642 pod_ready.go:82] duration metric: took 4.769593ms for pod "kube-apiserver-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.552621  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.557407  542642 pod_ready.go:93] pod "kube-controller-manager-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.557431  542642 pod_ready.go:82] duration metric: took 4.801768ms for pod "kube-controller-manager-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.557447  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bx7gb" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.634735  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:17.842241  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:17.936923  542642 pod_ready.go:93] pod "kube-proxy-bx7gb" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:17.936950  542642 pod_ready.go:82] duration metric: took 379.494749ms for pod "kube-proxy-bx7gb" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:17.936965  542642 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:18.032935  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:18.033167  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:18.134835  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:18.336919  542642 pod_ready.go:93] pod "kube-scheduler-addons-673472" in "kube-system" namespace has status "Ready":"True"
	I1028 11:01:18.336944  542642 pod_ready.go:82] duration metric: took 399.970822ms for pod "kube-scheduler-addons-673472" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:18.336956  542642 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace to be "Ready" ...
	I1028 11:01:18.340532  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:18.534909  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:18.536030  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:18.635514  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:18.845416  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:19.033842  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:19.034226  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:19.134919  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:19.341639  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:19.533828  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:19.534372  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:19.634892  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:19.840581  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:20.033493  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:01:20.033880  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:20.134191  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:20.342213  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:20.343655  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:20.533812  542642 kapi.go:107] duration metric: took 1m10.504828753s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 11:01:20.534024  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:20.634168  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:20.841313  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:21.032943  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:21.134466  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:21.342676  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:21.534559  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:21.635111  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:21.842094  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:22.033536  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:22.135157  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:22.344346  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:22.345191  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:22.533924  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:22.634477  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:22.841611  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:23.033994  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:23.135350  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:23.341871  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:23.535034  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:23.635334  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:23.841785  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:24.032671  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:24.135058  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:24.341235  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:24.535538  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:24.635117  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:24.841542  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:24.843062  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:25.035202  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:25.135226  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:25.341970  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:25.533683  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:25.635778  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:25.841215  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:26.034307  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:26.207043  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:01:26.508518  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:26.599567  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:26.811174  542642 kapi.go:107] duration metric: took 1m12.680214644s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 11:01:26.813201  542642 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-673472 cluster.
	I1028 11:01:26.814660  542642 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 11:01:26.816531  542642 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
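	(Editor's note: for reference, a minimal sketch of opting a pod out of the credential mount described in the three messages above. The pod name and image are hypothetical; this assumes the gcp-auth webhook honors the `gcp-auth-skip-secret` label at pod creation time, as the message states.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo          # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"    # tells the gcp-auth webhook not to mount GCP credentials
	spec:
	  containers:
	  - name: app
	    image: nginx                    # hypothetical image

	(Pods created with this label are admitted without the mounted credentials secret; per the message above, pods that already exist would need to be recreated, or the addon re-enabled with --refresh.)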
	I1028 11:01:26.921598  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:26.921632  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:27.033951  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:27.409167  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:27.533133  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:27.841444  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:28.033365  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:28.342270  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:28.532917  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:28.845716  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:29.033131  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:29.341164  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:29.342966  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:29.534745  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:29.841647  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:30.033575  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:30.342079  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:30.533487  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:30.842021  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:31.034031  542642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:01:31.342128  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:31.343654  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:31.534495  542642 kapi.go:107] duration metric: took 1m21.505480203s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 11:01:31.931485  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:32.376419  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:32.841919  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:33.342170  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:33.841820  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:33.843460  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:34.341093  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:34.843097  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:35.341309  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:35.841956  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:36.346779  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:36.351204  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:36.841895  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:37.342406  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:37.854289  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:38.346854  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:38.842971  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:38.843508  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:39.344042  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:39.841491  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:40.341886  542642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:01:40.842246  542642 kapi.go:107] duration metric: took 1m29.505873588s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 11:01:40.843126  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:40.844354  542642 out.go:177] * Enabled addons: ingress-dns, inspektor-gadget, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1028 11:01:40.846010  542642 addons.go:510] duration metric: took 1m36.916862098s for enable addons: enabled=[ingress-dns inspektor-gadget amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
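	[editor's note] To cross-check the enabled set summarized above against the profile's current state, a quick sketch (the -p value matches the profile used throughout this run):
	
	    minikube -p addons-673472 addons list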
	I1028 11:01:42.843444  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:44.843554  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:47.343498  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:49.344350  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:51.842829  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:53.843560  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:55.843632  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:01:57.843974  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:00.343576  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:02.343881  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:04.843727  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:07.343564  542642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"False"
	I1028 11:02:08.844486  542642 pod_ready.go:93] pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace has status "Ready":"True"
	I1028 11:02:08.844510  542642 pod_ready.go:82] duration metric: took 50.507548227s for pod "metrics-server-84c5f94fbc-wbsls" in "kube-system" namespace to be "Ready" ...
	I1028 11:02:08.844522  542642 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zktff" in "kube-system" namespace to be "Ready" ...
	I1028 11:02:08.848837  542642 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zktff" in "kube-system" namespace has status "Ready":"True"
	I1028 11:02:08.848860  542642 pod_ready.go:82] duration metric: took 4.331704ms for pod "nvidia-device-plugin-daemonset-zktff" in "kube-system" namespace to be "Ready" ...
	I1028 11:02:08.848879  542642 pod_ready.go:39] duration metric: took 1m20.329480138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:02:08.848901  542642 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:02:08.848936  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:02:08.849006  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:02:08.885551  542642 cri.go:89] found id: "87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:08.885583  542642 cri.go:89] found id: ""
	I1028 11:02:08.885595  542642 logs.go:282] 1 containers: [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802]
	I1028 11:02:08.885647  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:08.889170  542642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 11:02:08.889235  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:02:08.936962  542642 cri.go:89] found id: "86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:08.936987  542642 cri.go:89] found id: ""
	I1028 11:02:08.936999  542642 logs.go:282] 1 containers: [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163]
	I1028 11:02:08.937062  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:08.940948  542642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 11:02:08.941022  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:02:08.977719  542642 cri.go:89] found id: "558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:08.977746  542642 cri.go:89] found id: ""
	I1028 11:02:08.977754  542642 logs.go:282] 1 containers: [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b]
	I1028 11:02:08.977798  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:08.981284  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:02:08.981345  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:02:09.016943  542642 cri.go:89] found id: "f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:09.016973  542642 cri.go:89] found id: ""
	I1028 11:02:09.016983  542642 logs.go:282] 1 containers: [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd]
	I1028 11:02:09.017045  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.020959  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:02:09.021063  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:02:09.058112  542642 cri.go:89] found id: "d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:09.058134  542642 cri.go:89] found id: ""
	I1028 11:02:09.058142  542642 logs.go:282] 1 containers: [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e]
	I1028 11:02:09.058206  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.061716  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:02:09.061790  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:02:09.099845  542642 cri.go:89] found id: "780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:09.099873  542642 cri.go:89] found id: ""
	I1028 11:02:09.099883  542642 logs.go:282] 1 containers: [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c]
	I1028 11:02:09.099951  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.103773  542642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 11:02:09.103866  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:02:09.141449  542642 cri.go:89] found id: "d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:09.141473  542642 cri.go:89] found id: ""
	I1028 11:02:09.141484  542642 logs.go:282] 1 containers: [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769]
	I1028 11:02:09.141537  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:09.145006  542642 logs.go:123] Gathering logs for kubelet ...
	I1028 11:02:09.145035  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 11:02:09.206966  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.207147  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:09.207271  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.207422  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:09.234931  542642 logs.go:123] Gathering logs for dmesg ...
	I1028 11:02:09.234980  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:02:09.249356  542642 logs.go:123] Gathering logs for kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] ...
	I1028 11:02:09.249394  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:09.296227  542642 logs.go:123] Gathering logs for coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] ...
	I1028 11:02:09.296271  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:09.336130  542642 logs.go:123] Gathering logs for kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] ...
	I1028 11:02:09.336179  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:09.378871  542642 logs.go:123] Gathering logs for kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] ...
	I1028 11:02:09.378908  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:09.414200  542642 logs.go:123] Gathering logs for kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] ...
	I1028 11:02:09.414233  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:09.451220  542642 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:02:09.451256  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:02:09.559383  542642 logs.go:123] Gathering logs for etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] ...
	I1028 11:02:09.559421  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:09.610227  542642 logs.go:123] Gathering logs for kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] ...
	I1028 11:02:09.610267  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:09.671516  542642 logs.go:123] Gathering logs for CRI-O ...
	I1028 11:02:09.671558  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 11:02:09.743818  542642 logs.go:123] Gathering logs for container status ...
	I1028 11:02:09.743868  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:02:09.792271  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:09.792316  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:02:09.792384  542642 out.go:270] X Problems detected in kubelet:
	W1028 11:02:09.792397  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.792410  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:09.792423  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:09.792430  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:09.792436  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:09.792443  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:02:19.793985  542642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:02:19.808359  542642 api_server.go:72] duration metric: took 2m15.879271272s to wait for apiserver process to appear ...
	I1028 11:02:19.808384  542642 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:02:19.808428  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:02:19.808480  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:02:19.842622  542642 cri.go:89] found id: "87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:19.842654  542642 cri.go:89] found id: ""
	I1028 11:02:19.842666  542642 logs.go:282] 1 containers: [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802]
	I1028 11:02:19.842744  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.846413  542642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 11:02:19.846489  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:02:19.882646  542642 cri.go:89] found id: "86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:19.882680  542642 cri.go:89] found id: ""
	I1028 11:02:19.882692  542642 logs.go:282] 1 containers: [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163]
	I1028 11:02:19.882743  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.886454  542642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 11:02:19.886519  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:02:19.922106  542642 cri.go:89] found id: "558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:19.922128  542642 cri.go:89] found id: ""
	I1028 11:02:19.922138  542642 logs.go:282] 1 containers: [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b]
	I1028 11:02:19.922194  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.925691  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:02:19.925763  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:02:19.960189  542642 cri.go:89] found id: "f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:19.960212  542642 cri.go:89] found id: ""
	I1028 11:02:19.960219  542642 logs.go:282] 1 containers: [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd]
	I1028 11:02:19.960264  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:19.963896  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:02:19.963963  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:02:19.999912  542642 cri.go:89] found id: "d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:19.999937  542642 cri.go:89] found id: ""
	I1028 11:02:19.999945  542642 logs.go:282] 1 containers: [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e]
	I1028 11:02:20.000005  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:20.003932  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:02:20.004014  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:02:20.042260  542642 cri.go:89] found id: "780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:20.042289  542642 cri.go:89] found id: ""
	I1028 11:02:20.042298  542642 logs.go:282] 1 containers: [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c]
	I1028 11:02:20.042353  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:20.046134  542642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 11:02:20.046197  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:02:20.082205  542642 cri.go:89] found id: "d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:20.082236  542642 cri.go:89] found id: ""
	I1028 11:02:20.082246  542642 logs.go:282] 1 containers: [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769]
	I1028 11:02:20.082305  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:20.086153  542642 logs.go:123] Gathering logs for container status ...
	I1028 11:02:20.086185  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:02:20.129314  542642 logs.go:123] Gathering logs for dmesg ...
	I1028 11:02:20.129353  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:02:20.143558  542642 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:02:20.143595  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:02:20.246106  542642 logs.go:123] Gathering logs for etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] ...
	I1028 11:02:20.246136  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:20.293999  542642 logs.go:123] Gathering logs for coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] ...
	I1028 11:02:20.294043  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:20.332497  542642 logs.go:123] Gathering logs for kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] ...
	I1028 11:02:20.332532  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:20.373877  542642 logs.go:123] Gathering logs for kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] ...
	I1028 11:02:20.373915  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:20.410599  542642 logs.go:123] Gathering logs for kubelet ...
	I1028 11:02:20.410631  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 11:02:20.468141  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.468316  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:20.468439  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.468589  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:20.496348  542642 logs.go:123] Gathering logs for kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] ...
	I1028 11:02:20.496384  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:20.542990  542642 logs.go:123] Gathering logs for kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] ...
	I1028 11:02:20.543040  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:20.578906  542642 logs.go:123] Gathering logs for kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] ...
	I1028 11:02:20.578944  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:20.635286  542642 logs.go:123] Gathering logs for CRI-O ...
	I1028 11:02:20.635333  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 11:02:20.709226  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:20.709267  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:02:20.709343  542642 out.go:270] X Problems detected in kubelet:
	W1028 11:02:20.709361  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.709376  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:20.709390  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:20.709400  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:20.709412  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:20.709422  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:02:30.710105  542642 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1028 11:02:30.715305  542642 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1028 11:02:30.716476  542642 api_server.go:141] control plane version: v1.31.2
	I1028 11:02:30.716507  542642 api_server.go:131] duration metric: took 10.908117119s to wait for apiserver health ...
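	[editor's note] The health gate above polls the apiserver's healthz endpoint until it answers 200 with a body of "ok", exactly as logged two lines earlier. Reproducing the probe by hand, a sketch (address and path are the ones logged above; -k skips TLS verification because the apiserver presents a cluster-internal certificate):
	
	    curl -k https://192.168.49.2:8443/healthz
	    # expected body: ok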
	I1028 11:02:30.716516  542642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:02:30.716544  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:02:30.716605  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:02:30.751826  542642 cri.go:89] found id: "87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:30.751846  542642 cri.go:89] found id: ""
	I1028 11:02:30.751854  542642 logs.go:282] 1 containers: [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802]
	I1028 11:02:30.751901  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.755324  542642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 11:02:30.755382  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:02:30.790166  542642 cri.go:89] found id: "86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:30.790190  542642 cri.go:89] found id: ""
	I1028 11:02:30.790198  542642 logs.go:282] 1 containers: [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163]
	I1028 11:02:30.790252  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.793841  542642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 11:02:30.793907  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:02:30.829656  542642 cri.go:89] found id: "558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:30.829679  542642 cri.go:89] found id: ""
	I1028 11:02:30.829686  542642 logs.go:282] 1 containers: [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b]
	I1028 11:02:30.829747  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.833316  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:02:30.833377  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:02:30.868057  542642 cri.go:89] found id: "f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:30.868084  542642 cri.go:89] found id: ""
	I1028 11:02:30.868094  542642 logs.go:282] 1 containers: [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd]
	I1028 11:02:30.868152  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.871825  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:02:30.871893  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:02:30.907336  542642 cri.go:89] found id: "d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:30.907366  542642 cri.go:89] found id: ""
	I1028 11:02:30.907378  542642 logs.go:282] 1 containers: [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e]
	I1028 11:02:30.907433  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.910919  542642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:02:30.910994  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:02:30.945929  542642 cri.go:89] found id: "780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:30.945955  542642 cri.go:89] found id: ""
	I1028 11:02:30.945965  542642 logs.go:282] 1 containers: [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c]
	I1028 11:02:30.946033  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.949613  542642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 11:02:30.949683  542642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:02:30.984758  542642 cri.go:89] found id: "d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:30.984787  542642 cri.go:89] found id: ""
	I1028 11:02:30.984798  542642 logs.go:282] 1 containers: [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769]
	I1028 11:02:30.984853  542642 ssh_runner.go:195] Run: which crictl
	I1028 11:02:30.988141  542642 logs.go:123] Gathering logs for kubelet ...
	I1028 11:02:30.988159  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 11:02:31.046313  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.046492  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:31.046617  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.046768  542642 logs.go:138] Found kubelet problem: Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:31.075634  542642 logs.go:123] Gathering logs for kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] ...
	I1028 11:02:31.075678  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802"
	I1028 11:02:31.123351  542642 logs.go:123] Gathering logs for etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] ...
	I1028 11:02:31.123394  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163"
	I1028 11:02:31.171642  542642 logs.go:123] Gathering logs for coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] ...
	I1028 11:02:31.171674  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b"
	I1028 11:02:31.209068  542642 logs.go:123] Gathering logs for kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] ...
	I1028 11:02:31.209103  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd"
	I1028 11:02:31.250806  542642 logs.go:123] Gathering logs for CRI-O ...
	I1028 11:02:31.250852  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 11:02:31.326790  542642 logs.go:123] Gathering logs for dmesg ...
	I1028 11:02:31.326829  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:02:31.340598  542642 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:02:31.340634  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:02:31.442790  542642 logs.go:123] Gathering logs for kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] ...
	I1028 11:02:31.442823  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e"
	I1028 11:02:31.477657  542642 logs.go:123] Gathering logs for kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] ...
	I1028 11:02:31.477689  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c"
	I1028 11:02:31.536097  542642 logs.go:123] Gathering logs for kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] ...
	I1028 11:02:31.536142  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769"
	I1028 11:02:31.573364  542642 logs.go:123] Gathering logs for container status ...
	I1028 11:02:31.573393  542642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:02:31.617567  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:31.617600  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:02:31.617671  542642 out.go:270] X Problems detected in kubelet:
	W1028 11:02:31.617684  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.163425    1632 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.617691  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.163484    1632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	W1028 11:02:31.617700  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: W1028 11:00:48.164087    1632 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-673472" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-673472' and this object
	W1028 11:02:31.617707  542642 out.go:270]   Oct 28 11:00:48 addons-673472 kubelet[1632]: E1028 11:00:48.164135    1632 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-673472\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-673472' and this object" logger="UnhandledError"
	I1028 11:02:31.617714  542642 out.go:358] Setting ErrFile to fd 2...
	I1028 11:02:31.617721  542642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:02:41.627921  542642 system_pods.go:59] 19 kube-system pods found
	I1028 11:02:41.627970  542642 system_pods.go:61] "amd-gpu-device-plugin-rbj2l" [06398681-9fc4-40ad-bf57-1dfbcab84b18] Running
	I1028 11:02:41.627977  542642 system_pods.go:61] "coredns-7c65d6cfc9-67wn8" [cdf89129-7554-4b64-996d-010412cebe81] Running
	I1028 11:02:41.627981  542642 system_pods.go:61] "csi-hostpath-attacher-0" [98fb08da-880f-4a9b-ac30-b1088dc77ed4] Running
	I1028 11:02:41.627985  542642 system_pods.go:61] "csi-hostpath-resizer-0" [3ff39992-a5fa-4c23-b4e8-447516f86aa3] Running
	I1028 11:02:41.627989  542642 system_pods.go:61] "csi-hostpathplugin-bbjgv" [8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6] Running
	I1028 11:02:41.627993  542642 system_pods.go:61] "etcd-addons-673472" [b971d450-f424-4d0c-9ed4-36d27855789f] Running
	I1028 11:02:41.627998  542642 system_pods.go:61] "kindnet-v9f97" [7ee1e13b-0b02-4fa1-91d6-3024c746da7e] Running
	I1028 11:02:41.628003  542642 system_pods.go:61] "kube-apiserver-addons-673472" [b1474126-fd31-4595-a223-36b97b89c20b] Running
	I1028 11:02:41.628008  542642 system_pods.go:61] "kube-controller-manager-addons-673472" [45b96afd-51c3-41e8-8471-ace3e96aa9ab] Running
	I1028 11:02:41.628013  542642 system_pods.go:61] "kube-ingress-dns-minikube" [5c010972-8925-448a-8dbc-f653c352a411] Running
	I1028 11:02:41.628018  542642 system_pods.go:61] "kube-proxy-bx7gb" [33118a0f-5e5a-491e-92f3-adfac41fe8a7] Running
	I1028 11:02:41.628026  542642 system_pods.go:61] "kube-scheduler-addons-673472" [498d5407-0a87-4251-9439-e27f43eed34c] Running
	I1028 11:02:41.628033  542642 system_pods.go:61] "metrics-server-84c5f94fbc-wbsls" [49ebcec4-5d24-4e53-87da-1cbbff8ac5e9] Running
	I1028 11:02:41.628038  542642 system_pods.go:61] "nvidia-device-plugin-daemonset-zktff" [1db498a0-7243-4eed-9b71-4a44ffadbf48] Running
	I1028 11:02:41.628112  542642 system_pods.go:61] "registry-66c9cd494c-lmvk5" [bf3603f0-8ec8-43cc-b75c-299459db5001] Running
	I1028 11:02:41.628119  542642 system_pods.go:61] "registry-proxy-24mvc" [94641dc2-0fe0-44ee-8265-3d276479b3ff] Running
	I1028 11:02:41.628125  542642 system_pods.go:61] "snapshot-controller-56fcc65765-75jc2" [7755b682-0c95-4200-98e2-291af6055537] Running
	I1028 11:02:41.628131  542642 system_pods.go:61] "snapshot-controller-56fcc65765-7sj9h" [47fe8932-32e4-4b34-95f8-e6c4abe22b0f] Running
	I1028 11:02:41.628137  542642 system_pods.go:61] "storage-provisioner" [859db836-484d-4ce9-bb84-ae9a067e2f0d] Running
	I1028 11:02:41.628146  542642 system_pods.go:74] duration metric: took 10.911622468s to wait for pod list to return data ...
	I1028 11:02:41.628158  542642 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:02:41.632312  542642 default_sa.go:45] found service account: "default"
	I1028 11:02:41.632335  542642 default_sa.go:55] duration metric: took 4.168274ms for default service account to be created ...
	I1028 11:02:41.632345  542642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:02:41.642194  542642 system_pods.go:86] 19 kube-system pods found
	I1028 11:02:41.642226  542642 system_pods.go:89] "amd-gpu-device-plugin-rbj2l" [06398681-9fc4-40ad-bf57-1dfbcab84b18] Running
	I1028 11:02:41.642233  542642 system_pods.go:89] "coredns-7c65d6cfc9-67wn8" [cdf89129-7554-4b64-996d-010412cebe81] Running
	I1028 11:02:41.642237  542642 system_pods.go:89] "csi-hostpath-attacher-0" [98fb08da-880f-4a9b-ac30-b1088dc77ed4] Running
	I1028 11:02:41.642241  542642 system_pods.go:89] "csi-hostpath-resizer-0" [3ff39992-a5fa-4c23-b4e8-447516f86aa3] Running
	I1028 11:02:41.642245  542642 system_pods.go:89] "csi-hostpathplugin-bbjgv" [8a10ff93-1e9e-4d53-8d54-8dd55b4f0ea6] Running
	I1028 11:02:41.642248  542642 system_pods.go:89] "etcd-addons-673472" [b971d450-f424-4d0c-9ed4-36d27855789f] Running
	I1028 11:02:41.642252  542642 system_pods.go:89] "kindnet-v9f97" [7ee1e13b-0b02-4fa1-91d6-3024c746da7e] Running
	I1028 11:02:41.642255  542642 system_pods.go:89] "kube-apiserver-addons-673472" [b1474126-fd31-4595-a223-36b97b89c20b] Running
	I1028 11:02:41.642259  542642 system_pods.go:89] "kube-controller-manager-addons-673472" [45b96afd-51c3-41e8-8471-ace3e96aa9ab] Running
	I1028 11:02:41.642264  542642 system_pods.go:89] "kube-ingress-dns-minikube" [5c010972-8925-448a-8dbc-f653c352a411] Running
	I1028 11:02:41.642267  542642 system_pods.go:89] "kube-proxy-bx7gb" [33118a0f-5e5a-491e-92f3-adfac41fe8a7] Running
	I1028 11:02:41.642270  542642 system_pods.go:89] "kube-scheduler-addons-673472" [498d5407-0a87-4251-9439-e27f43eed34c] Running
	I1028 11:02:41.642274  542642 system_pods.go:89] "metrics-server-84c5f94fbc-wbsls" [49ebcec4-5d24-4e53-87da-1cbbff8ac5e9] Running
	I1028 11:02:41.642279  542642 system_pods.go:89] "nvidia-device-plugin-daemonset-zktff" [1db498a0-7243-4eed-9b71-4a44ffadbf48] Running
	I1028 11:02:41.642285  542642 system_pods.go:89] "registry-66c9cd494c-lmvk5" [bf3603f0-8ec8-43cc-b75c-299459db5001] Running
	I1028 11:02:41.642288  542642 system_pods.go:89] "registry-proxy-24mvc" [94641dc2-0fe0-44ee-8265-3d276479b3ff] Running
	I1028 11:02:41.642292  542642 system_pods.go:89] "snapshot-controller-56fcc65765-75jc2" [7755b682-0c95-4200-98e2-291af6055537] Running
	I1028 11:02:41.642297  542642 system_pods.go:89] "snapshot-controller-56fcc65765-7sj9h" [47fe8932-32e4-4b34-95f8-e6c4abe22b0f] Running
	I1028 11:02:41.642301  542642 system_pods.go:89] "storage-provisioner" [859db836-484d-4ce9-bb84-ae9a067e2f0d] Running
	I1028 11:02:41.642311  542642 system_pods.go:126] duration metric: took 9.960953ms to wait for k8s-apps to be running ...
	I1028 11:02:41.642322  542642 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:02:41.642371  542642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:02:41.654652  542642 system_svc.go:56] duration metric: took 12.318102ms WaitForService to wait for kubelet
	I1028 11:02:41.654684  542642 kubeadm.go:582] duration metric: took 2m37.72560234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:02:41.654707  542642 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:02:41.657943  542642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1028 11:02:41.657974  542642 node_conditions.go:123] node cpu capacity is 8
	I1028 11:02:41.657988  542642 node_conditions.go:105] duration metric: took 3.276114ms to run NodePressure ...
	I1028 11:02:41.658001  542642 start.go:241] waiting for startup goroutines ...
	I1028 11:02:41.658007  542642 start.go:246] waiting for cluster config update ...
	I1028 11:02:41.658024  542642 start.go:255] writing updated cluster config ...
	I1028 11:02:41.658294  542642 ssh_runner.go:195] Run: rm -f paused
	I1028 11:02:41.711431  542642 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:02:41.713734  542642 out.go:177] * Done! kubectl is now configured to use "addons-673472" cluster and "default" namespace by default
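	[editor's note] Since minikube points kubectl at the new profile on completion, a one-line sketch to confirm the active context (assuming, per minikube's convention, that the context name matches the profile name):
	
	    kubectl config current-context   # expected: addons-673472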
	
	
	==> CRI-O <==
	Oct 28 11:06:01 addons-673472 crio[1040]: time="2024-10-28 11:06:01.962547825Z" level=info msg="Closing host port tcp:80"
	Oct 28 11:06:01 addons-673472 crio[1040]: time="2024-10-28 11:06:01.962581208Z" level=info msg="Closing host port tcp:443"
	Oct 28 11:06:01 addons-673472 crio[1040]: time="2024-10-28 11:06:01.963843812Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 28 11:06:01 addons-673472 crio[1040]: time="2024-10-28 11:06:01.963864029Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 28 11:06:01 addons-673472 crio[1040]: time="2024-10-28 11:06:01.963996277Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-bxh4n Namespace:ingress-nginx ID:fbc804c52c72e9ba490fb6e4a534884efca2975a6f0989c9b9d231a305dd25ce UID:10fb8daa-361e-4300-91b8-5b1ed36dee87 NetNS:/var/run/netns/b948f91a-9ba4-444c-a18a-dad68f3ca602 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 28 11:06:01 addons-673472 crio[1040]: time="2024-10-28 11:06:01.964111145Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-bxh4n from CNI network \"kindnet\" (type=ptp)"
	Oct 28 11:06:01 addons-673472 crio[1040]: time="2024-10-28 11:06:01.998101900Z" level=info msg="Stopped pod sandbox: fbc804c52c72e9ba490fb6e4a534884efca2975a6f0989c9b9d231a305dd25ce" id=144368b8-0a4d-4fc6-b226-72b65e753d1c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 11:06:02 addons-673472 crio[1040]: time="2024-10-28 11:06:02.218440939Z" level=info msg="Removing container: 67c3976fe918a3c1501b2c5f5e81fc9966727f86296f4c935b8a6f3c75387aef" id=06e74871-3bf5-4d3a-92f4-e4ed99d4dafa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 11:06:02 addons-673472 crio[1040]: time="2024-10-28 11:06:02.231317399Z" level=info msg="Removed container 67c3976fe918a3c1501b2c5f5e81fc9966727f86296f4c935b8a6f3c75387aef: ingress-nginx/ingress-nginx-controller-5f85ff4588-bxh4n/controller" id=06e74871-3bf5-4d3a-92f4-e4ed99d4dafa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 11:06:58 addons-673472 crio[1040]: time="2024-10-28 11:06:58.985246528Z" level=info msg="Removing container: 310fbe2dabb94c3e090b27ef223fae20d78d7082f20fe613fc70b83d72446f04" id=8b01cc4b-c4b6-4e7f-947e-f1fe91fca40a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.002772067Z" level=info msg="Removed container 310fbe2dabb94c3e090b27ef223fae20d78d7082f20fe613fc70b83d72446f04: ingress-nginx/ingress-nginx-admission-patch-nd8pm/patch" id=8b01cc4b-c4b6-4e7f-947e-f1fe91fca40a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.004288651Z" level=info msg="Removing container: 2ab0e34f5bc41bf3e5c01be715daa1a2261a0db27a8940e215e1fda73787b31f" id=9dcf0a7b-8917-4ac7-a2cd-8b1ecbcc809e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.019538854Z" level=info msg="Removed container 2ab0e34f5bc41bf3e5c01be715daa1a2261a0db27a8940e215e1fda73787b31f: ingress-nginx/ingress-nginx-admission-create-zstdd/create" id=9dcf0a7b-8917-4ac7-a2cd-8b1ecbcc809e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.021300374Z" level=info msg="Stopping pod sandbox: ba771f66563ad42e11ae060fbcb98f0ebb37ea5c0d0ab6235c14484556855d65" id=625adf46-5c61-4ab6-9802-02808fbf480c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.021340419Z" level=info msg="Stopped pod sandbox (already stopped): ba771f66563ad42e11ae060fbcb98f0ebb37ea5c0d0ab6235c14484556855d65" id=625adf46-5c61-4ab6-9802-02808fbf480c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.021620923Z" level=info msg="Removing pod sandbox: ba771f66563ad42e11ae060fbcb98f0ebb37ea5c0d0ab6235c14484556855d65" id=7aa5c977-2e1f-445f-898e-d35ca1804fac name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.027873049Z" level=info msg="Removed pod sandbox: ba771f66563ad42e11ae060fbcb98f0ebb37ea5c0d0ab6235c14484556855d65" id=7aa5c977-2e1f-445f-898e-d35ca1804fac name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.028358828Z" level=info msg="Stopping pod sandbox: 9d68dc1c8354456f40429b593cb41f41a269d3f24916e4306fa8025f6c6230f3" id=1f145701-b521-418f-b57f-faf05558327f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.028401562Z" level=info msg="Stopped pod sandbox (already stopped): 9d68dc1c8354456f40429b593cb41f41a269d3f24916e4306fa8025f6c6230f3" id=1f145701-b521-418f-b57f-faf05558327f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.028770508Z" level=info msg="Removing pod sandbox: 9d68dc1c8354456f40429b593cb41f41a269d3f24916e4306fa8025f6c6230f3" id=62565e72-3e1f-4b92-9d78-136e65e7f419 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.036340668Z" level=info msg="Removed pod sandbox: 9d68dc1c8354456f40429b593cb41f41a269d3f24916e4306fa8025f6c6230f3" id=62565e72-3e1f-4b92-9d78-136e65e7f419 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.036836855Z" level=info msg="Stopping pod sandbox: fbc804c52c72e9ba490fb6e4a534884efca2975a6f0989c9b9d231a305dd25ce" id=e7b12b49-eecb-4ce7-849e-2fc9e3b03ca1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.036872713Z" level=info msg="Stopped pod sandbox (already stopped): fbc804c52c72e9ba490fb6e4a534884efca2975a6f0989c9b9d231a305dd25ce" id=e7b12b49-eecb-4ce7-849e-2fc9e3b03ca1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.037181729Z" level=info msg="Removing pod sandbox: fbc804c52c72e9ba490fb6e4a534884efca2975a6f0989c9b9d231a305dd25ce" id=f673761b-b99d-4f6e-9b2c-10a1d3266a16 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 28 11:06:59 addons-673472 crio[1040]: time="2024-10-28 11:06:59.043850917Z" level=info msg="Removed pod sandbox: fbc804c52c72e9ba490fb6e4a534884efca2975a6f0989c9b9d231a305dd25ce" id=f673761b-b99d-4f6e-9b2c-10a1d3266a16 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6804af3c3a6bb       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   0e8afd0447a9f       hello-world-app-55bf9c44b4-w7m2n
	2d2542cd954b8       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   15a2f5ee37606       nginx
	8d1474a0966ca       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   66d9210ec05ce       busybox
	ef77ae889b5ef       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   8 minutes ago       Running             metrics-server            0                   b06593215f93a       metrics-server-84c5f94fbc-wbsls
	558c3bfb5f08c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   9c922a06d5d22       coredns-7c65d6cfc9-67wn8
	9c5994b319418       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   3d6fb9799962e       storage-provisioner
	d7dc377c1ec14       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                                        9 minutes ago       Running             kindnet-cni               0                   9d45c24995558       kindnet-v9f97
	d696cc719e6ea       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        9 minutes ago       Running             kube-proxy                0                   bd99683fb696e       kube-proxy-bx7gb
	780a49bac595f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        9 minutes ago       Running             kube-controller-manager   0                   f10ba1e222682       kube-controller-manager-addons-673472
	f2f6d4fe59b6a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        9 minutes ago       Running             kube-scheduler            0                   368281cf760e0       kube-scheduler-addons-673472
	87d6522eeaa67       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        9 minutes ago       Running             kube-apiserver            0                   bd97ae32ee7e1       kube-apiserver-addons-673472
	86f61a9b0f576       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        9 minutes ago       Running             etcd                      0                   978b7489a4a8f       etcd-addons-673472
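	
	Note: every container in this listing (the crictl view of the node) is Running with ATTEMPT 0, i.e. zero restarts, including nginx and hello-world-app, so whatever broke the ingress test was a connectivity problem rather than a crashing workload. To re-collect, assuming SSH access to the node:
	
	    $ minikube -p addons-673472 ssh -- sudo crictl ps -a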
	
	
	==> coredns [558c3bfb5f08c36f8254ac554966ecae77b859c1892d28a297cb7435cc16512b] <==
	[INFO] 10.244.0.21:33757 - 56829 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006026177s
	[INFO] 10.244.0.21:37561 - 56808 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.0037699s
	[INFO] 10.244.0.21:39222 - 25325 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005053592s
	[INFO] 10.244.0.21:53143 - 55586 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005437036s
	[INFO] 10.244.0.21:41592 - 33536 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005200035s
	[INFO] 10.244.0.21:33757 - 59275 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003920786s
	[INFO] 10.244.0.21:36426 - 270 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005294867s
	[INFO] 10.244.0.21:58394 - 32533 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005396741s
	[INFO] 10.244.0.21:39233 - 34294 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00361794s
	[INFO] 10.244.0.21:36426 - 60175 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00510728s
	[INFO] 10.244.0.21:41592 - 60413 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005855469s
	[INFO] 10.244.0.21:53143 - 52411 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005606799s
	[INFO] 10.244.0.21:39233 - 37718 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005852641s
	[INFO] 10.244.0.21:37561 - 63316 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006629646s
	[INFO] 10.244.0.21:39222 - 35362 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006457741s
	[INFO] 10.244.0.21:58394 - 12429 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005695258s
	[INFO] 10.244.0.21:33757 - 61751 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006632466s
	[INFO] 10.244.0.21:39233 - 62702 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00008409s
	[INFO] 10.244.0.21:41592 - 22032 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075257s
	[INFO] 10.244.0.21:53143 - 32894 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000196039s
	[INFO] 10.244.0.21:36426 - 6539 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000162021s
	[INFO] 10.244.0.21:58394 - 8750 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074213s
	[INFO] 10.244.0.21:33757 - 20452 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104324s
	[INFO] 10.244.0.21:39222 - 31941 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075714s
	[INFO] 10.244.0.21:37561 - 30007 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053115s
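	
	Note: the NXDOMAIN lines are normal search-path expansion (the GCE host appends .c.k8s-minikube.internal and .google.internal before trying the cluster domain); the closing NOERROR answers show hello-world-app.default.svc.cluster.local resolving correctly, so in-cluster DNS was working. A quick re-check from a throwaway pod (the busybox image tag is an assumption):
	
	    $ kubectl --context addons-673472 run dns-check --rm -i --restart=Never \
	        --image=busybox:1.36 -- nslookup hello-world-app.default.svc.cluster.local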
	
	
	==> describe nodes <==
	Name:               addons-673472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-673472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=addons-673472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T10_59_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-673472
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 10:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-673472
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:09:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:06:05 +0000   Mon, 28 Oct 2024 10:59:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:06:05 +0000   Mon, 28 Oct 2024 10:59:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:06:05 +0000   Mon, 28 Oct 2024 10:59:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:06:05 +0000   Mon, 28 Oct 2024 11:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-673472
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fb705493fdf4a4695128d58fcd0c875
	  System UUID:                17eec836-98df-4a92-abb5-eb6145cff181
	  Boot ID:                    a5d554e2-50f9-4cf6-aaf5-eeaeea5ccf20
	  Kernel Version:             5.15.0-1070-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  default                     hello-world-app-55bf9c44b4-w7m2n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 coredns-7c65d6cfc9-67wn8                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m14s
	  kube-system                 etcd-addons-673472                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m19s
	  kube-system                 kindnet-v9f97                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m14s
	  kube-system                 kube-apiserver-addons-673472             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-addons-673472    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-bx7gb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-addons-673472             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-84c5f94fbc-wbsls          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m8s                   kube-proxy       
	  Normal   Starting                 9m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m24s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node addons-673472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node addons-673472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node addons-673472 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m19s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m19s                  kubelet          Node addons-673472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m19s                  kubelet          Node addons-673472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m19s                  kubelet          Node addons-673472 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m15s                  node-controller  Node addons-673472 event: Registered Node addons-673472 in Controller
	  Normal   NodeReady                8m29s                  kubelet          Node addons-673472 status is now: NodeReady
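	
	Note: the node has been Ready since 11:00:48 with no taints and light commitments (950m CPU requested of 8 cores, ~420Mi of ~32Gi memory), so neither failure is a scheduling or capacity problem. This section is the standard describe output and can be regenerated with:
	
	    $ kubectl --context addons-673472 describe node addons-673472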
	
	
	==> dmesg <==
	[  +0.000655] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000642] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000794] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000671] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000683] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000639] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.688659] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024607] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.032061] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.028193] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.324014] kauditd_printk_skb: 44 callbacks suppressed
	[Oct28 11:03] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +1.023428] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +2.019803] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +4.219728] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[  +8.191369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[Oct28 11:04] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
	[ +34.045529] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 4b 23 1b 71 d1 36 90 bc 5b b5 cc 08 00
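	
	Note: the repeated "martian source 10.244.0.21 from 127.0.0.1" drops on eth0 begin at Oct28 11:03, exactly when the ingress test was curling http://127.0.0.1/. Packets DNAT-ed from localhost toward the ingress pod (10.244.0.21) keep their 127.0.0.1 source address and are discarded by the kernel as martians, which would explain curl timing out (exit status 28 is curl's operation-timeout code). A sanity check of the relevant sysctls, assuming node access:
	
	    $ minikube -p addons-673472 ssh -- sysctl net.ipv4.conf.eth0.route_localnet net.ipv4.conf.all.rp_filter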
	
	
	==> etcd [86f61a9b0f576ab97387af2123a08da049c1494a2b546709a0a71dd13cfa6163] <==
	{"level":"info","ts":"2024-10-28T11:00:07.308200Z","caller":"traceutil/trace.go:171","msg":"trace[932224814] transaction","detail":"{read_only:false; number_of_response:1; response_revision:432; }","duration":"100.728977ms","start":"2024-10-28T11:00:07.207448Z","end":"2024-10-28T11:00:07.308177Z","steps":["trace[932224814] 'process raft request'  (duration: 19.989254ms)","trace[932224814] 'compare'  (duration: 80.267981ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:00:07.606279Z","caller":"traceutil/trace.go:171","msg":"trace[1523903152] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"288.063987ms","start":"2024-10-28T11:00:07.318199Z","end":"2024-10-28T11:00:07.606263Z","steps":["trace[1523903152] 'process raft request'  (duration: 287.931241ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:00:07.616188Z","caller":"traceutil/trace.go:171","msg":"trace[1655175651] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"293.280924ms","start":"2024-10-28T11:00:07.322890Z","end":"2024-10-28T11:00:07.616171Z","steps":["trace[1655175651] 'process raft request'  (duration: 293.193728ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:00:07.616285Z","caller":"traceutil/trace.go:171","msg":"trace[1843020188] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"296.477771ms","start":"2024-10-28T11:00:07.319791Z","end":"2024-10-28T11:00:07.616269Z","steps":["trace[1843020188] 'process raft request'  (duration: 288.388544ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:00:07.616381Z","caller":"traceutil/trace.go:171","msg":"trace[694769862] transaction","detail":"{read_only:false; number_of_response:1; response_revision:438; }","duration":"296.388847ms","start":"2024-10-28T11:00:07.319982Z","end":"2024-10-28T11:00:07.616371Z","steps":["trace[694769862] 'process raft request'  (duration: 296.037145ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:26.582734Z","caller":"traceutil/trace.go:171","msg":"trace[1226832532] linearizableReadLoop","detail":"{readStateIndex:1177; appliedIndex:1176; }","duration":"155.4859ms","start":"2024-10-28T11:01:26.427218Z","end":"2024-10-28T11:01:26.582704Z","steps":["trace[1226832532] 'read index received'  (duration: 155.331163ms)","trace[1226832532] 'applied index is now lower than readState.Index'  (duration: 153.908µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:26.582775Z","caller":"traceutil/trace.go:171","msg":"trace[1334436647] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"161.235746ms","start":"2024-10-28T11:01:26.421518Z","end":"2024-10-28T11:01:26.582753Z","steps":["trace[1334436647] 'process raft request'  (duration: 161.031275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:01:26.582925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.68281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-673472\" ","response":"range_response_count:1 size:5985"}
	{"level":"info","ts":"2024-10-28T11:01:26.582970Z","caller":"traceutil/trace.go:171","msg":"trace[1065955371] range","detail":"{range_begin:/registry/minions/addons-673472; range_end:; response_count:1; response_revision:1145; }","duration":"155.738798ms","start":"2024-10-28T11:01:26.427214Z","end":"2024-10-28T11:01:26.582952Z","steps":["trace[1065955371] 'agreement among raft nodes before linearized reading'  (duration: 155.58609ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:26.593230Z","caller":"traceutil/trace.go:171","msg":"trace[184068508] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"165.853483ms","start":"2024-10-28T11:01:26.427356Z","end":"2024-10-28T11:01:26.593209Z","steps":["trace[184068508] 'process raft request'  (duration: 165.74865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:01:26.805948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.281813ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032861497871490 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/gcp-auth/gcp-auth\" mod_revision:771 > success:<request_put:<key:\"/registry/services/endpoints/gcp-auth/gcp-auth\" value_size:499 >> failure:<request_range:<key:\"/registry/services/endpoints/gcp-auth/gcp-auth\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T11:01:26.806178Z","caller":"traceutil/trace.go:171","msg":"trace[593227183] linearizableReadLoop","detail":"{readStateIndex:1181; appliedIndex:1178; }","duration":"172.42603ms","start":"2024-10-28T11:01:26.633742Z","end":"2024-10-28T11:01:26.806168Z","steps":["trace[593227183] 'read index received'  (duration: 20.88635ms)","trace[593227183] 'applied index is now lower than readState.Index'  (duration: 151.539015ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:26.806181Z","caller":"traceutil/trace.go:171","msg":"trace[1484974083] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"219.073952ms","start":"2024-10-28T11:01:26.587090Z","end":"2024-10-28T11:01:26.806164Z","steps":["trace[1484974083] 'process raft request'  (duration: 67.4872ms)","trace[1484974083] 'compare'  (duration: 151.158811ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:26.806204Z","caller":"traceutil/trace.go:171","msg":"trace[771625790] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"219.046453ms","start":"2024-10-28T11:01:26.587136Z","end":"2024-10-28T11:01:26.806182Z","steps":["trace[771625790] 'process raft request'  (duration: 218.913621ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:26.806336Z","caller":"traceutil/trace.go:171","msg":"trace[1779034871] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"218.016967ms","start":"2024-10-28T11:01:26.588311Z","end":"2024-10-28T11:01:26.806328Z","steps":["trace[1779034871] 'process raft request'  (duration: 217.806219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:01:26.806389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.647938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:01:26.806432Z","caller":"traceutil/trace.go:171","msg":"trace[1662475941] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1149; }","duration":"172.698955ms","start":"2024-10-28T11:01:26.633724Z","end":"2024-10-28T11:01:26.806423Z","steps":["trace[1662475941] 'agreement among raft nodes before linearized reading'  (duration: 172.626372ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:32.506091Z","caller":"traceutil/trace.go:171","msg":"trace[94474605] linearizableReadLoop","detail":"{readStateIndex:1210; appliedIndex:1209; }","duration":"129.568034ms","start":"2024-10-28T11:01:32.376498Z","end":"2024-10-28T11:01:32.506066Z","steps":["trace[94474605] 'read index received'  (duration: 61.82986ms)","trace[94474605] 'applied index is now lower than readState.Index'  (duration: 67.737349ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:32.506173Z","caller":"traceutil/trace.go:171","msg":"trace[17385449] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"129.836214ms","start":"2024-10-28T11:01:32.376281Z","end":"2024-10-28T11:01:32.506117Z","steps":["trace[17385449] 'process raft request'  (duration: 62.09369ms)","trace[17385449] 'compare'  (duration: 67.584312ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:01:32.506275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.750774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-673472\" ","response":"range_response_count:1 size:5985"}
	{"level":"info","ts":"2024-10-28T11:01:32.506309Z","caller":"traceutil/trace.go:171","msg":"trace[936494536] range","detail":"{range_begin:/registry/minions/addons-673472; range_end:; response_count:1; response_revision:1177; }","duration":"129.806601ms","start":"2024-10-28T11:01:32.376493Z","end":"2024-10-28T11:01:32.506300Z","steps":["trace[936494536] 'agreement among raft nodes before linearized reading'  (duration: 129.670029ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:01:37.979190Z","caller":"traceutil/trace.go:171","msg":"trace[11625262] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"127.539135ms","start":"2024-10-28T11:01:37.851633Z","end":"2024-10-28T11:01:37.979173Z","steps":["trace[11625262] 'process raft request'  (duration: 62.581026ms)","trace[11625262] 'compare'  (duration: 64.73073ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:01:37.979177Z","caller":"traceutil/trace.go:171","msg":"trace[1437064814] linearizableReadLoop","detail":"{readStateIndex:1242; appliedIndex:1241; }","duration":"125.1034ms","start":"2024-10-28T11:01:37.854046Z","end":"2024-10-28T11:01:37.979149Z","steps":["trace[1437064814] 'read index received'  (duration: 60.135963ms)","trace[1437064814] 'applied index is now lower than readState.Index'  (duration: 64.966272ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:01:37.979346Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.283733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-673472\" ","response":"range_response_count:1 size:5985"}
	{"level":"info","ts":"2024-10-28T11:01:37.979381Z","caller":"traceutil/trace.go:171","msg":"trace[312256287] range","detail":"{range_begin:/registry/minions/addons-673472; range_end:; response_count:1; response_revision:1205; }","duration":"125.336366ms","start":"2024-10-28T11:01:37.854035Z","end":"2024-10-28T11:01:37.979371Z","steps":["trace[312256287] 'agreement among raft nodes before linearized reading'  (duration: 125.165792ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:09:17 up  2:51,  0 users,  load average: 0.23, 5.59, 45.70
	Linux addons-673472 5.15.0-1070-gcp #78~20.04.1-Ubuntu SMP Wed Oct 9 22:05:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
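	
	Note: the load averages read 0.23 / 5.59 / 45.70 over 1 / 5 / 15 minutes, meaning the host was under very heavy load earlier in the run (consistent with the slow etcd applies above) and had recovered by the time this report was collected. Re-read with:
	
	    $ minikube -p addons-673472 ssh -- cat /proc/loadavg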
	
	
	==> kindnet [d7dc377c1ec143c52a5c44b63516a30f0c70334b070cb431b5ac6ccb34f79769] <==
	I1028 11:07:07.907149       1 main.go:300] handling current node
	I1028 11:07:17.913613       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:07:17.913648       1 main.go:300] handling current node
	I1028 11:07:27.912834       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:07:27.912882       1 main.go:300] handling current node
	I1028 11:07:37.915978       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:07:37.916018       1 main.go:300] handling current node
	I1028 11:07:47.912841       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:07:47.912889       1 main.go:300] handling current node
	I1028 11:07:57.910525       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:07:57.910565       1 main.go:300] handling current node
	I1028 11:08:07.906537       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:08:07.906574       1 main.go:300] handling current node
	I1028 11:08:17.914293       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:08:17.914334       1 main.go:300] handling current node
	I1028 11:08:27.908364       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:08:27.908402       1 main.go:300] handling current node
	I1028 11:08:37.908829       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:08:37.908890       1 main.go:300] handling current node
	I1028 11:08:47.913368       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:08:47.913410       1 main.go:300] handling current node
	I1028 11:08:57.909097       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:08:57.909148       1 main.go:300] handling current node
	I1028 11:09:07.906741       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1028 11:09:07.906795       1 main.go:300] handling current node
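	
	Note: a clean 10-second reconciliation heartbeat over the single node for the entire window; the CNI was not involved in either failure. To watch it live (the app=kindnet label is an assumption about the DaemonSet's selector):
	
	    $ kubectl --context addons-673472 -n kube-system logs -l app=kindnet -f --tail=10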
	
	
	==> kube-apiserver [87d6522eeaa6770d3fb01cbd3a25ea3cbb5e1faae498a59c9b60b94781bd2802] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 11:02:13.857502       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 11:02:52.435604       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52148: use of closed network connection
	E1028 11:02:52.605932       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52174: use of closed network connection
	I1028 11:03:01.639658       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.129.166"}
	I1028 11:03:30.265863       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E1028 11:03:31.212447       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W1028 11:03:31.321002       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 11:03:35.765401       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 11:03:35.958723       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.26.74"}
	I1028 11:03:38.907880       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 11:03:55.695720       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.695782       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:03:55.728888       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.729009       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:03:55.818778       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.818937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:03:55.829159       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:03:55.829713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 11:03:56.819964       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 11:03:56.834810       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 11:03:56.905657       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 11:05:54.814712       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.236.91"}
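	
	Note: the leading error, a 503 while refreshing APIService "v1beta1.metrics.k8s.io", is the apiserver-side symptom of the metrics-server failure: the aggregated metrics API never became reachable even though the metrics-server container itself shows Running above. The "Terminating all watchers" lines at 11:03 correspond to the gadget and volume-snapshot addons being disabled and are not faults. Check the APIService's availability directly:
	
	    $ kubectl --context addons-673472 get apiservice v1beta1.metrics.k8s.io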
	
	
	==> kube-controller-manager [780a49bac595fe5a7b5668dac5a9e52eb6f3981ee3deb78bf4e050cfd3a09f5c] <==
	E1028 11:06:46.636521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:07:12.442212       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:07:12.442260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:07:18.408391       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:07:18.408448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:07:27.086225       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:07:27.086278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:07:40.549892       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:07:40.549946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:07:51.432263       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:07:51.432313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:08:13.584011       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:08:13.584060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:08:14.901019       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:08:14.901077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:08:22.115064       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:08:22.115112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:08:46.324848       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:08:46.324892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:08:47.261651       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:08:47.261700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:08:57.699820       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:08:57.699877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:09:10.511401       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:09:10.511450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
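	
	Note: the endlessly repeating PartialObjectMetadata list failures come from the metadata informers (garbage collector / quota) retrying a resource whose API has gone away; given the traces.gadget.kinvolk.io watcher termination in the apiserver log at 11:03:31, this is most likely leftover retry noise from the disabled gadget addon rather than a controller fault. Confirm the group is gone with:
	
	    $ kubectl --context addons-673472 api-resources --api-group=gadget.kinvolk.io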
	
	
	==> kube-proxy [d696cc719e6ead159265aa1813a4fb52da93430b7832e0ec7a099fa604a8f81e] <==
	I1028 11:00:06.428936       1 server_linux.go:66] "Using iptables proxy"
	I1028 11:00:07.908221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 11:00:07.908479       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:00:08.508253       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 11:00:08.508356       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:00:08.514904       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:00:08.517596       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:00:08.517984       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:00:08.520203       1 config.go:199] "Starting service config controller"
	I1028 11:00:08.521561       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:00:08.520704       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:00:08.521683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:00:08.521278       1 config.go:328] "Starting node config controller"
	I1028 11:00:08.521742       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:00:08.622392       1 shared_informer.go:320] Caches are synced for node config
	I1028 11:00:08.622394       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:00:08.622401       1 shared_informer.go:320] Caches are synced for service config
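	
	Note: a clean startup; the line worth flagging is "Setting route_localnet=1", which is what allows NodePort services to accept connections on 127.0.0.1 in the first place and is the counterpart of the martian-source drops in the dmesg section above. The resulting NAT rules can be inspected with (assumes node access):
	
	    $ minikube -p addons-673472 ssh -- sudo iptables -t nat -L KUBE-NODEPORTS -n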
	
	
	==> kube-scheduler [f2f6d4fe59b6ac265c774da59e3b2fcae412d8a1253e78e4708fd194dbcf3ecd] <==
	W1028 10:59:56.227184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1028 10:59:56.227218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 10:59:56.227230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1028 10:59:56.227324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.040202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 10:59:57.040247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.070694       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 10:59:57.070752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.073947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 10:59:57.073989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.122466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 10:59:57.122513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.250605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 10:59:57.250651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.255021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 10:59:57.255063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.297855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 10:59:57.297912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.342782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 10:59:57.342833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.398440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 10:59:57.398487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:59:57.605853       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 10:59:57.605895       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1028 10:59:59.624074       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
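	
	Note: the "forbidden" storm is the usual startup race before the scheduler's RBAC-backed informers sync; the final line shows caches synced at 10:59:59 and nothing recurs afterwards, so the scheduler is healthy. To confirm no later errors:
	
	    $ kubectl --context addons-673472 -n kube-system logs kube-scheduler-addons-673472 --tail=20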
	
	
	==> kubelet <==
	Oct 28 11:07:18 addons-673472 kubelet[1632]: E1028 11:07:18.971390    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113638971050435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:28 addons-673472 kubelet[1632]: E1028 11:07:28.974529    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113648974225757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:28 addons-673472 kubelet[1632]: E1028 11:07:28.974578    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113648974225757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:38 addons-673472 kubelet[1632]: E1028 11:07:38.978687    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113658977463366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:38 addons-673472 kubelet[1632]: E1028 11:07:38.978731    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113658977463366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:48 addons-673472 kubelet[1632]: E1028 11:07:48.980909    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113668980570235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:48 addons-673472 kubelet[1632]: E1028 11:07:48.980949    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113668980570235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:53 addons-673472 kubelet[1632]: I1028 11:07:53.715468    1632 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 11:07:58 addons-673472 kubelet[1632]: E1028 11:07:58.984135    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113678983856065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:07:58 addons-673472 kubelet[1632]: E1028 11:07:58.984179    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113678983856065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:08 addons-673472 kubelet[1632]: E1028 11:08:08.986875    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113688986571489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:08 addons-673472 kubelet[1632]: E1028 11:08:08.986914    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113688986571489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:18 addons-673472 kubelet[1632]: E1028 11:08:18.990152    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113698989847328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:18 addons-673472 kubelet[1632]: E1028 11:08:18.990199    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113698989847328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:28 addons-673472 kubelet[1632]: E1028 11:08:28.992876    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113708992601027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:28 addons-673472 kubelet[1632]: E1028 11:08:28.992910    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113708992601027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:38 addons-673472 kubelet[1632]: E1028 11:08:38.995822    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113718995454680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:38 addons-673472 kubelet[1632]: E1028 11:08:38.995869    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113718995454680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:48 addons-673472 kubelet[1632]: E1028 11:08:48.998113    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113728997838516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:48 addons-673472 kubelet[1632]: E1028 11:08:48.998149    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113728997838516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:56 addons-673472 kubelet[1632]: I1028 11:08:56.715538    1632 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 11:08:59 addons-673472 kubelet[1632]: E1028 11:08:59.002166    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113739001806626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:08:59 addons-673472 kubelet[1632]: E1028 11:08:59.002208    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113739001806626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:09:09 addons-673472 kubelet[1632]: E1028 11:09:09.004680    1632 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113749004399343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:09:09 addons-673472 kubelet[1632]: E1028 11:09:09.004727    1632 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113749004399343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608413,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9c5994b319418ea2b9da3599b93024a16ec2b2a2060f1eb06019e311d4b3e36a] <==
	I1028 11:00:49.120426       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:00:49.128116       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:00:49.128193       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:00:49.138757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:00:49.138904       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbaeb622-2a3a-47c5-8672-b3e4cec045b1", APIVersion:"v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-673472_d714ab4c-d27b-4db4-80f2-b1df72977db8 became leader
	I1028 11:00:49.138919       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-673472_d714ab4c-d27b-4db4-80f2-b1df72977db8!
	I1028 11:00:49.240010       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-673472_d714ab4c-d27b-4db4-80f2-b1df72977db8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-673472 -n addons-673472
helpers_test.go:261: (dbg) Run:  kubectl --context addons-673472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (361.92s)
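
Note on the repeating kubelet errors above: every ~10s the eviction manager logs "failed to get HasDedicatedImageFs ... missing image stats", yet the embedded ImageFsInfoResponse does contain image-filesystem stats for /var/lib/containers/storage/overlay-images; only its ContainerFilesystems list is empty. That suggests the kubelet is rejecting an incomplete response from CRI-O rather than failing to collect stats at all. A minimal way to inspect the same CRI data while the cluster is still up, assuming crictl is available inside the node (invocation style borrowed from the test's own ssh calls):

    # query image filesystem usage straight from the CRI runtime on the node
    out/minikube-linux-amd64 -p addons-673472 ssh "sudo crictl imagefsinfo"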


Test pass (302/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.36
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 5.82
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.22
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.13
21 TestBinaryMirror 0.78
22 TestOffline 88.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 201.01
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.48
35 TestAddons/parallel/Registry 15.64
37 TestAddons/parallel/InspektorGadget 10.75
40 TestAddons/parallel/CSI 44.03
41 TestAddons/parallel/Headlamp 17.6
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 50.96
44 TestAddons/parallel/NvidiaDevicePlugin 6.52
45 TestAddons/parallel/Yakd 11.72
46 TestAddons/parallel/AmdGpuDevicePlugin 6.51
47 TestAddons/StoppedEnableDisable 12.11
48 TestCertOptions 28.1
49 TestCertExpiration 222.84
51 TestForceSystemdFlag 28.74
52 TestForceSystemdEnv 23.36
54 TestKVMDriverInstallOrUpdate 1.22
58 TestErrorSpam/setup 23.2
59 TestErrorSpam/start 0.61
60 TestErrorSpam/status 0.89
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 1.37
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.13
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.14
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.9
75 TestFunctional/serial/CacheCmd/cache/add_local 1.03
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 28.93
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.43
87 TestFunctional/serial/InvalidService 4.1
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 7.51
91 TestFunctional/parallel/DryRun 0.42
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.91
97 TestFunctional/parallel/ServiceCmdConnect 8.84
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 29.8
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 2
103 TestFunctional/parallel/MySQL 23.06
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 1.95
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
113 TestFunctional/parallel/License 0.19
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.23
120 TestFunctional/parallel/ServiceCmd/List 0.58
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
123 TestFunctional/parallel/ServiceCmd/Format 0.35
124 TestFunctional/parallel/ServiceCmd/URL 0.49
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
132 TestFunctional/parallel/MountCmd/any-port 7.78
133 TestFunctional/parallel/ProfileCmd/profile_list 0.37
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 0.56
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
144 TestFunctional/parallel/ImageCommands/ImageBuild 2.62
145 TestFunctional/parallel/ImageCommands/Setup 0.4
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.53
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
150 TestFunctional/parallel/ImageCommands/ImageRemove 2.21
151 TestFunctional/parallel/MountCmd/specific-port 2.07
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.13
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 153.21
162 TestMultiControlPlane/serial/DeployApp 8.39
163 TestMultiControlPlane/serial/PingHostFromPods 1.11
164 TestMultiControlPlane/serial/AddWorkerNode 33.86
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
167 TestMultiControlPlane/serial/CopyFile 15.97
168 TestMultiControlPlane/serial/StopSecondaryNode 12.54
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 19.83
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 218.49
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.43
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 35.63
176 TestMultiControlPlane/serial/RestartCluster 58.47
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 64.36
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
183 TestJSONOutput/start/Command 38.21
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.69
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.61
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.75
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
208 TestKicCustomNetwork/create_custom_network 30.15
209 TestKicCustomNetwork/use_default_bridge_network 23.18
210 TestKicExistingNetwork 25.99
211 TestKicCustomSubnet 23.31
212 TestKicStaticIP 26.76
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 50.84
217 TestMountStart/serial/StartWithMountFirst 8.49
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 5.59
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.6
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.18
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 67.32
229 TestMultiNode/serial/DeployApp2Nodes 5.86
230 TestMultiNode/serial/PingHostFrom2Pods 0.77
231 TestMultiNode/serial/AddNode 24.86
232 TestMultiNode/serial/MultiNodeLabels 0.07
233 TestMultiNode/serial/ProfileList 0.62
234 TestMultiNode/serial/CopyFile 9.09
235 TestMultiNode/serial/StopNode 2.11
236 TestMultiNode/serial/StartAfterStop 9.06
237 TestMultiNode/serial/RestartKeepsNodes 79.24
238 TestMultiNode/serial/DeleteNode 4.98
239 TestMultiNode/serial/StopMultiNode 23.79
240 TestMultiNode/serial/RestartMultiNode 57.34
241 TestMultiNode/serial/ValidateNameConflict 25.91
246 TestPreload 106.65
248 TestScheduledStopUnix 100.21
251 TestInsufficientStorage 12.87
252 TestRunningBinaryUpgrade 127.6
254 TestKubernetesUpgrade 362.64
255 TestMissingContainerUpgrade 94.67
256 TestStoppedBinaryUpgrade/Setup 0.42
257 TestStoppedBinaryUpgrade/Upgrade 109
266 TestPause/serial/Start 46.9
267 TestStoppedBinaryUpgrade/MinikubeLogs 1
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
270 TestNoKubernetes/serial/StartWithK8s 25.76
278 TestNetworkPlugins/group/false 3.89
282 TestPause/serial/SecondStartNoReconfiguration 28.98
283 TestNoKubernetes/serial/StartWithStopK8s 8.49
284 TestNoKubernetes/serial/Start 7.84
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
286 TestNoKubernetes/serial/ProfileList 4.55
287 TestNoKubernetes/serial/Stop 1.22
288 TestNoKubernetes/serial/StartNoArgs 8.76
289 TestPause/serial/Pause 0.86
290 TestPause/serial/VerifyStatus 0.32
291 TestPause/serial/Unpause 0.7
292 TestPause/serial/PauseAgain 0.82
293 TestPause/serial/DeletePaused 3.44
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
295 TestPause/serial/VerifyDeletedResources 2.46
297 TestStartStop/group/old-k8s-version/serial/FirstStart 133.13
299 TestStartStop/group/no-preload/serial/FirstStart 54.83
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
301 TestStartStop/group/no-preload/serial/DeployApp 10.26
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
303 TestStartStop/group/old-k8s-version/serial/Stop 11.86
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
305 TestStartStop/group/no-preload/serial/Stop 14.43
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/old-k8s-version/serial/SecondStart 149.34
309 TestStartStop/group/embed-certs/serial/FirstStart 73.77
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
311 TestStartStop/group/no-preload/serial/SecondStart 263.21
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.38
314 TestStartStop/group/embed-certs/serial/DeployApp 9.27
315 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
317 TestStartStop/group/embed-certs/serial/Stop 11.95
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.83
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/embed-certs/serial/SecondStart 262.79
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.26
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
327 TestStartStop/group/old-k8s-version/serial/Pause 2.63
329 TestStartStop/group/newest-cni/serial/FirstStart 26.45
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.82
332 TestStartStop/group/newest-cni/serial/Stop 1.2
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
334 TestStartStop/group/newest-cni/serial/SecondStart 12.95
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
338 TestStartStop/group/newest-cni/serial/Pause 2.78
339 TestNetworkPlugins/group/auto/Start 42.27
340 TestNetworkPlugins/group/auto/KubeletFlags 0.27
341 TestNetworkPlugins/group/auto/NetCatPod 8.19
342 TestNetworkPlugins/group/auto/DNS 0.13
343 TestNetworkPlugins/group/auto/Localhost 0.12
344 TestNetworkPlugins/group/auto/HairPin 0.12
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
347 TestNetworkPlugins/group/kindnet/Start 75.25
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
349 TestStartStop/group/no-preload/serial/Pause 3.09
350 TestNetworkPlugins/group/calico/Start 51.03
351 TestNetworkPlugins/group/calico/ControllerPod 6.01
352 TestNetworkPlugins/group/calico/KubeletFlags 0.26
353 TestNetworkPlugins/group/calico/NetCatPod 11.19
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/DNS 0.13
356 TestNetworkPlugins/group/calico/Localhost 0.11
357 TestNetworkPlugins/group/calico/HairPin 0.11
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
364 TestNetworkPlugins/group/kindnet/DNS 0.14
365 TestNetworkPlugins/group/kindnet/Localhost 0.12
366 TestNetworkPlugins/group/kindnet/HairPin 0.12
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
368 TestStartStop/group/embed-certs/serial/Pause 2.99
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.29
371 TestNetworkPlugins/group/custom-flannel/Start 53.23
372 TestNetworkPlugins/group/enable-default-cni/Start 75.45
373 TestNetworkPlugins/group/flannel/Start 53.52
374 TestNetworkPlugins/group/bridge/Start 66.67
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/custom-flannel/DNS 0.13
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
382 TestNetworkPlugins/group/flannel/NetCatPod 10.2
383 TestNetworkPlugins/group/flannel/DNS 0.16
384 TestNetworkPlugins/group/flannel/Localhost 0.12
385 TestNetworkPlugins/group/flannel/HairPin 0.12
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 9.19
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
393 TestNetworkPlugins/group/bridge/DNS 0.14
394 TestNetworkPlugins/group/bridge/Localhost 0.12
395 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.20.0/json-events (6.36s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-434419 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-434419 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.355537493s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.36s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 10:59:11.771407  541347 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1028 10:59:11.771514  541347 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
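
This check is purely local: it passes when the tarball fetched by the earlier download-only run is already in the minikube cache. A manual equivalent, using the cache path from the preload.go:146 line above:

    # confirm the cached v1.20.0 CRI-O preload tarball is on disk
    ls -lh /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4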

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-434419
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-434419: exit status 85 (69.680601ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-434419 | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |          |
	|         | -p download-only-434419        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:59:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:59:05.462204  541358 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:59:05.462360  541358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:05.462369  541358 out.go:358] Setting ErrFile to fd 2...
	I1028 10:59:05.462374  541358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:05.462546  541358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	W1028 10:59:05.462704  541358 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19876-533928/.minikube/config/config.json: open /home/jenkins/minikube-integration/19876-533928/.minikube/config/config.json: no such file or directory
	I1028 10:59:05.463502  541358 out.go:352] Setting JSON to true
	I1028 10:59:05.465322  541358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9689,"bootTime":1730103456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:59:05.465445  541358 start.go:139] virtualization: kvm guest
	I1028 10:59:05.468060  541358 out.go:97] [download-only-434419] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1028 10:59:05.468211  541358 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 10:59:05.468292  541358 notify.go:220] Checking for updates...
	I1028 10:59:05.470068  541358 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:59:05.471630  541358 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:59:05.472936  541358 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 10:59:05.474169  541358 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	I1028 10:59:05.475417  541358 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 10:59:05.477776  541358 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:59:05.478031  541358 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:59:05.500692  541358 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:59:05.500815  541358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:05.866492  541358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:05.856795442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:05.866609  541358 docker.go:318] overlay module found
	I1028 10:59:05.868543  541358 out.go:97] Using the docker driver based on user configuration
	I1028 10:59:05.868577  541358 start.go:297] selected driver: docker
	I1028 10:59:05.868584  541358 start.go:901] validating driver "docker" against <nil>
	I1028 10:59:05.868690  541358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:05.918774  541358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:05.909460788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:05.918969  541358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:59:05.919559  541358 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1028 10:59:05.919757  541358 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:59:05.921770  541358 out.go:169] Using Docker driver with root privileges
	I1028 10:59:05.923006  541358 cni.go:84] Creating CNI manager for ""
	I1028 10:59:05.923086  541358 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:05.923099  541358 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 10:59:05.923181  541358 start.go:340] cluster config:
	{Name:download-only-434419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-434419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:05.924606  541358 out.go:97] Starting "download-only-434419" primary control-plane node in "download-only-434419" cluster
	I1028 10:59:05.924630  541358 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 10:59:05.925893  541358 out.go:97] Pulling base image v0.0.45-1729876044-19868 ...
	I1028 10:59:05.925921  541358 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 10:59:05.926043  541358 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 10:59:05.942455  541358 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:05.942638  541358 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1028 10:59:05.942729  541358 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:05.950417  541358 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 10:59:05.950442  541358 cache.go:56] Caching tarball of preloaded images
	I1028 10:59:05.950565  541358 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 10:59:05.952470  541358 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 10:59:05.952495  541358 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1028 10:59:05.978168  541358 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 10:59:09.468537  541358 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1028 10:59:09.468642  541358 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-434419 host does not exist
	  To start a cluster, run: "minikube start -p download-only-434419"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
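
The exit status 85 is expected: as the captured output notes, the download-only-434419 host was never created, so there are no cluster logs to read. The "Last Start" section above also records the exact preload URL and md5 checksum (download.go:107); a sketch of reproducing the download and verification by hand, with both values copied from that line:

    # fetch the v1.20.0 preload and verify it against the advertised md5
    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
    echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -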

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-434419
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.2/json-events (5.82s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-337919 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-337919 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.817036309s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.82s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 10:59:18.014738  541347 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1028 10:59:18.014782  541347 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-337919
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-337919: exit status 85 (69.589716ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-434419 | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | -p download-only-434419        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| delete  | -p download-only-434419        | download-only-434419 | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| start   | -o=json --download-only        | download-only-337919 | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | -p download-only-337919        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:59:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:59:12.242490  541701 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:59:12.242644  541701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:12.242655  541701 out.go:358] Setting ErrFile to fd 2...
	I1028 10:59:12.242660  541701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:12.242860  541701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 10:59:12.243480  541701 out.go:352] Setting JSON to true
	I1028 10:59:12.244391  541701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9696,"bootTime":1730103456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:59:12.244520  541701 start.go:139] virtualization: kvm guest
	I1028 10:59:12.246472  541701 out.go:97] [download-only-337919] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 10:59:12.246657  541701 notify.go:220] Checking for updates...
	I1028 10:59:12.248232  541701 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:59:12.249870  541701 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:59:12.251343  541701 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 10:59:12.252879  541701 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	I1028 10:59:12.254158  541701 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 10:59:12.256815  541701 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:59:12.257112  541701 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:59:12.279158  541701 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:59:12.279247  541701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:12.326536  541701 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:12.317250883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:12.326679  541701 docker.go:318] overlay module found
	I1028 10:59:12.328530  541701 out.go:97] Using the docker driver based on user configuration
	I1028 10:59:12.328562  541701 start.go:297] selected driver: docker
	I1028 10:59:12.328569  541701 start.go:901] validating driver "docker" against <nil>
	I1028 10:59:12.328661  541701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:12.380567  541701 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:59:12.370976998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 10:59:12.380872  541701 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:59:12.381573  541701 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1028 10:59:12.381765  541701 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:59:12.383856  541701 out.go:169] Using Docker driver with root privileges
	I1028 10:59:12.385385  541701 cni.go:84] Creating CNI manager for ""
	I1028 10:59:12.385461  541701 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1028 10:59:12.385473  541701 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 10:59:12.385537  541701 start.go:340] cluster config:
	{Name:download-only-337919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-337919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:12.387070  541701 out.go:97] Starting "download-only-337919" primary control-plane node in "download-only-337919" cluster
	I1028 10:59:12.387100  541701 cache.go:121] Beginning downloading kic base image for docker with crio
	I1028 10:59:12.388618  541701 out.go:97] Pulling base image v0.0.45-1729876044-19868 ...
	I1028 10:59:12.388651  541701 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:12.388740  541701 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 10:59:12.405173  541701 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:12.405313  541701 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1028 10:59:12.405332  541701 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory, skipping pull
	I1028 10:59:12.405337  541701 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in cache, skipping pull
	I1028 10:59:12.405345  541701 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e as a tarball
	I1028 10:59:12.417781  541701 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 10:59:12.417830  541701 cache.go:56] Caching tarball of preloaded images
	I1028 10:59:12.422053  541701 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:59:12.424044  541701 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 10:59:12.424078  541701 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 10:59:12.447558  541701 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 10:59:16.643290  541701 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 10:59:16.643394  541701 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19876-533928/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-337919 host does not exist
	  To start a cluster, run: "minikube start -p download-only-337919"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-337919
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.13s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-026471 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-026471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-026471
--- PASS: TestDownloadOnlyKic (1.13s)

                                                
                                    
x
+
TestBinaryMirror (0.78s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 10:59:19.861489  541347 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-985535 --alsologtostderr --binary-mirror http://127.0.0.1:40257 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-985535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-985535
--- PASS: TestBinaryMirror (0.78s)
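
The checksum URL in the log above pairs the kubectl binary with its published SHA-256. A minimal manual equivalent, using the exact URLs from the log (the verification minikube performs internally may differ), is:

    curl -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
    curl -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256"
    # sha256sum expects "<hash>  <file>" and prints "kubectl: OK" on a match
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check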

                                                
                                    
x
+
TestOffline (88.39s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-648121 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-648121 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m23.674350554s)
helpers_test.go:175: Cleaning up "offline-crio-648121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-648121
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-648121: (4.713104267s)
--- PASS: TestOffline (88.39s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-673472
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-673472: exit status 85 (64.439679ms)

                                                
                                                
-- stdout --
	* Profile "addons-673472" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-673472"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-673472
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-673472: exit status 85 (66.315855ms)

                                                
                                                
-- stdout --
	* Profile "addons-673472" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-673472"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (201.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-673472 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-673472 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m21.012586085s)
--- PASS: TestAddons/Setup (201.01s)
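
With a start line this long, the quickest sanity check afterwards is the addons list subcommand (standard minikube CLI; a sketch, not part of the test):

    out/minikube-linux-amd64 -p addons-673472 addons list
    # each addon passed via --addons above should be reported as enabled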

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-673472 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-673472 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-673472 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-673472 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bf52917d-1928-47c1-8a9f-768114df73d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bf52917d-1928-47c1-8a9f-768114df73d9] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004832911s
addons_test.go:633: (dbg) Run:  kubectl --context addons-673472 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-673472 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-673472 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.48s)
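
The assertions above confirm the gcp-auth webhook injected credentials into the busybox pod. A hedged way to inspect the whole injected environment at once (standard kubectl; the pod name comes from the log):

    kubectl --context addons-673472 exec busybox -- printenv | grep GOOGLE
    # GOOGLE_APPLICATION_CREDENTIALS should point at the mounted fake key and
    # GOOGLE_CLOUD_PROJECT at the simulated project (values are not shown in the log)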

                                                
                                    
x
+
TestAddons/parallel/Registry (15.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.460918ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-lmvk5" [bf3603f0-8ec8-43cc-b75c-299459db5001] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002749613s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-24mvc" [94641dc2-0fe0-44ee-8265-3d276479b3ff] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004058999s
addons_test.go:331: (dbg) Run:  kubectl --context addons-673472 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-673472 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-673472 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.862372448s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 ip
2024/10/28 11:03:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.64s)
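
The wget probe above only spiders the service root. A natural follow-up sketch lists repositories via the standard Docker Registry HTTP API v2 (the /v2/_catalog path is part of that API; image and context names are taken from the log):

    kubectl --context addons-673472 run reg-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      wget -qO- http://registry.kube-system.svc.cluster.local/v2/_catalog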

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dznbx" [3692bcfc-35d5-40e9-b87d-70fb4696c53d] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004108796s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 addons disable inspektor-gadget --alsologtostderr -v=1: (5.741803769s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                    
x
+
TestAddons/parallel/CSI (44.03s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1028 11:03:18.512952  541347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1028 11:03:18.522300  541347 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 11:03:18.522331  541347 kapi.go:107] duration metric: took 9.421921ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.431765ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-673472 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-673472 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d64c6f49-3660-4e5f-b5b9-4161d7e640d1] Pending
helpers_test.go:344: "task-pv-pod" [d64c6f49-3660-4e5f-b5b9-4161d7e640d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d64c6f49-3660-4e5f-b5b9-4161d7e640d1] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00399969s
addons_test.go:511: (dbg) Run:  kubectl --context addons-673472 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-673472 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-673472 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-673472 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-673472 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-673472 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-673472 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [eaa9702d-ee08-4b6d-b889-be03cf65a689] Pending
helpers_test.go:344: "task-pv-pod-restore" [eaa9702d-ee08-4b6d-b889-be03cf65a689] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [eaa9702d-ee08-4b6d-b889-be03cf65a689] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003713874s
addons_test.go:553: (dbg) Run:  kubectl --context addons-673472 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-673472 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-673472 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.556951718s)
--- PASS: TestAddons/parallel/CSI (44.03s)
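
The manifests under testdata/csi-hostpath-driver/ are not reproduced in this log. A minimal sketch of a snapshot like the one created at addons_test.go:511 could look as follows; the object names appear in the log, while the snapshot class name is an assumption:

    kubectl --context addons-673472 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    EOF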

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-673472 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-fsk44" [a1242f87-5dd6-4bce-9ef4-b1b19e60b66d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-fsk44" [a1242f87-5dd6-4bce-9ef4-b1b19e60b66d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004292508s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 addons disable headlamp --alsologtostderr -v=1: (5.782191329s)
--- PASS: TestAddons/parallel/Headlamp (17.60s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-h4rxp" [7c2808e2-1b9c-451c-8013-ac6ac88904fa] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004490795s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (50.96s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-673472 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-673472 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673472 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1822845d-8ffe-479c-ad7f-24872a5372a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1822845d-8ffe-479c-ad7f-24872a5372a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1822845d-8ffe-479c-ad7f-24872a5372a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004443503s
addons_test.go:906: (dbg) Run:  kubectl --context addons-673472 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 ssh "cat /opt/local-path-provisioner/pvc-b5123f9c-13e2-4f3b-9621-6a638e949257_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-673472 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-673472 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.106559211s)
--- PASS: TestAddons/parallel/LocalPath (50.96s)
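
testdata/storage-provisioner-rancher/pvc.yaml is likewise not shown here. A minimal sketch of a claim served by the local-path provisioner might be (the claim name is from the log; size and access mode are assumptions):

    kubectl --context addons-673472 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]   # assumed
      storageClassName: local-path
      resources:
        requests:
          storage: 64Mi                # assumed size
    EOF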

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zktff" [1db498a0-7243-4eed-9b71-4a44ffadbf48] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004283369s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-x6p9x" [e23e76d9-84b6-4316-8b61-92cd3be29a26] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00358964s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-673472 addons disable yakd --alsologtostderr -v=1: (5.720178173s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-rbj2l" [06398681-9fc4-40ad-bf57-1dfbcab84b18] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004499153s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.51s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.11s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-673472
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-673472: (11.8403492s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-673472
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-673472
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-673472
--- PASS: TestAddons/StoppedEnableDisable (12.11s)

                                                
                                    
x
+
TestCertOptions (28.1s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-798980 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-798980 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.328855133s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-798980 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-798980 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-798980 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-798980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-798980
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-798980: (3.013547147s)
--- PASS: TestCertOptions (28.10s)
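
The openssl check at cert_options_test.go:60 dumps the whole certificate. To eyeball just the requested SANs, the output can be narrowed (grep pattern is standard openssl text output; the ssh form mirrors the invocation in the log):

    out/minikube-linux-amd64 ssh -p cert-options-798980 -- \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the entries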

                                                
                                    
x
+
TestCertExpiration (222.84s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-023343 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-023343 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.598039909s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-023343 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-023343 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.730983972s)
helpers_test.go:175: Cleaning up "cert-expiration-023343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-023343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-023343: (2.506592892s)
--- PASS: TestCertExpiration (222.84s)
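
Before the profile is deleted, the effect of --cert-expiration=8760h could be confirmed directly on the node; -enddate is a standard openssl flag (a sketch, not part of the test):

    out/minikube-linux-amd64 ssh -p cert-expiration-023343 -- \
      "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
    # notAfter should land roughly one year (8760h) after the second start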

                                                
                                    
x
+
TestForceSystemdFlag (28.74s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-244039 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-244039 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.86346325s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-244039 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-244039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-244039
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-244039: (2.514106182s)
--- PASS: TestForceSystemdFlag (28.74s)
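
The cat at docker_test.go:132 verifies the generated CRI-O drop-in. What the check boils down to is the cgroup manager line; a narrower sketch (the expected value is assumed from the flag's purpose, since the file contents are not shown in the log):

    out/minikube-linux-amd64 -p force-systemd-flag-244039 ssh -- \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # expected: cgroup_manager = "systemd"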

                                                
                                    
x
+
TestForceSystemdEnv (23.36s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-956216 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-956216 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.252079446s)
helpers_test.go:175: Cleaning up "force-systemd-env-956216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-956216
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-956216: (2.1058471s)
--- PASS: TestForceSystemdEnv (23.36s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.22s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1028 11:38:58.277958  541347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 11:38:58.278092  541347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1028 11:38:58.337831  541347 install.go:62] docker-machine-driver-kvm2: exit status 1
W1028 11:38:58.338155  541347 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 11:38:58.338222  541347 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1467226857/001/docker-machine-driver-kvm2
I1028 11:38:58.476408  541347 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1467226857/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000793c80 gz:0xc000793c88 tar:0xc000793c10 tar.bz2:0xc000793c40 tar.gz:0xc000793c50 tar.xz:0xc000793c60 tar.zst:0xc000793c70 tbz2:0xc000793c40 tgz:0xc000793c50 txz:0xc000793c60 tzst:0xc000793c70 xz:0xc000793c90 zip:0xc000793ca0 zst:0xc000793c98] Getters:map[file:0xc001dc5c20 http:0xc0008b9400 https:0xc0008b9450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 11:38:58.476463  541347 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1467226857/001/docker-machine-driver-kvm2
I1028 11:38:58.999466  541347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 11:38:58.999563  541347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1028 11:38:59.030439  541347 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1028 11:38:59.030486  541347 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1028 11:38:59.030581  541347 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 11:38:59.030623  541347 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1467226857/002/docker-machine-driver-kvm2
I1028 11:38:59.053106  541347 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1467226857/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000793c80 gz:0xc000793c88 tar:0xc000793c10 tar.bz2:0xc000793c40 tar.gz:0xc000793c50 tar.xz:0xc000793c60 tar.zst:0xc000793c70 tbz2:0xc000793c40 tgz:0xc000793c50 txz:0xc000793c60 tzst:0xc000793c70 xz:0xc000793c90 zip:0xc000793ca0 zst:0xc000793c98] Getters:map[file:0xc001ceda60 http:0xc00080e0a0 https:0xc00080e0f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 11:38:59.053229  541347 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1467226857/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.22s)
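
The log above shows the driver download falling back from the arch-specific release asset to the unsuffixed name when the checksum fetch returns 404. A shell sketch of the same fallback, with URLs taken from the log:

    base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    # try the amd64-suffixed asset first, then fall back to the common name
    curl -fLo docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2-amd64" \
      || curl -fLo docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2"
    chmod +x docker-machine-driver-kvm2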

                                                
                                    
x
+
TestErrorSpam/setup (23.2s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-244101 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-244101 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-244101 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-244101 --driver=docker  --container-runtime=crio: (23.198308643s)
--- PASS: TestErrorSpam/setup (23.20s)

                                                
                                    
x
+
TestErrorSpam/start (0.61s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

                                                
                                    
x
+
TestErrorSpam/status (0.89s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 status
--- PASS: TestErrorSpam/status (0.89s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
x
+
TestErrorSpam/stop (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 stop: (1.177884751s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244101 --log_dir /tmp/nospam-244101 stop
--- PASS: TestErrorSpam/stop (1.37s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19876-533928/.minikube/files/etc/test/nested/copy/541347/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (41.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607680 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-607680 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.129644106s)
--- PASS: TestFunctional/serial/StartWithProxy (41.13s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1028 11:10:51.321150  541347 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607680 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-607680 --alsologtostderr -v=8: (27.135284368s)
functional_test.go:663: soft start took 27.136118205s for "functional-607680" cluster.
I1028 11:11:18.456855  541347 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (27.14s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-607680 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.90s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-607680 /tmp/TestFunctionalserialCacheCmdcacheadd_local3489611125/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cache add minikube-local-cache-test:functional-607680
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cache delete minikube-local-cache-test:functional-607680
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-607680
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.634503ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 kubectl -- --context functional-607680 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-607680 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (28.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607680 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-607680 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.927206215s)
functional_test.go:761: restart took 28.927364926s for "functional-607680" cluster.
I1028 11:11:53.852771  541347 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (28.93s)
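
One way to confirm that --extra-config=apiserver.enable-admission-plugins=... actually reached the running apiserver (jsonpath is standard kubectl; the static-pod name follows the usual kube-apiserver-<node> convention, so it is an assumption here):

    kubectl --context functional-607680 -n kube-system \
      get pod kube-apiserver-functional-607680 \
      -o jsonpath='{.spec.containers[0].command[*]}' | tr ' ' '\n' | grep admission-plugins
    # expect: --enable-admission-plugins=NamespaceAutoProvision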

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-607680 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
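The health check above reduces to one predicate per control-plane pod: phase "Running" and a "Ready" condition of "True" in the JSON kubectl returns. A sketch of that predicate, with the struct trimmed to just the fields the check needs:

package main

import (
	"encoding/json"
	"fmt"
)

// podList covers only the fields the health predicate reads.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

// healthy requires every pod to be Running with a Ready=True condition,
// matching the phase/status pairs logged above.
func healthy(raw []byte) error {
	var pl podList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return err
	}
	for _, p := range pl.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		if p.Status.Phase != "Running" || !ready {
			return fmt.Errorf("%s is not healthy", p.Metadata.Name)
		}
	}
	return nil
}

func main() {
	// Hypothetical one-pod sample in the shape kubectl -o json produces.
	sample := []byte(`{"items":[{"metadata":{"name":"etcd-functional-607680"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`)
	fmt.Println("healthy:", healthy(sample))
}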

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-607680 logs: (1.473163103s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.43s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 logs --file /tmp/TestFunctionalserialLogsFileCmd1451125092/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-607680 logs --file /tmp/TestFunctionalserialLogsFileCmd1451125092/001/logs.txt: (1.425304793s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

TestFunctional/serial/InvalidService (4.1s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-607680 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-607680
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-607680: exit status 115 (340.091265ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31753 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-607680 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 config get cpus: exit status 14 (78.847452ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 config get cpus: exit status 14 (71.765702ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (7.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-607680 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-607680 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 582708: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.51s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-607680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (169.240906ms)

-- stdout --
	* [functional-607680] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1028 11:12:24.765755  581891 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:12:24.765904  581891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:12:24.765916  581891 out.go:358] Setting ErrFile to fd 2...
	I1028 11:12:24.765923  581891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:12:24.766136  581891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 11:12:24.766673  581891 out.go:352] Setting JSON to false
	I1028 11:12:24.767712  581891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10489,"bootTime":1730103456,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:12:24.767832  581891 start.go:139] virtualization: kvm guest
	I1028 11:12:24.770159  581891 out.go:177] * [functional-607680] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:12:24.771680  581891 notify.go:220] Checking for updates...
	I1028 11:12:24.771684  581891 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:12:24.772937  581891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:12:24.774221  581891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 11:12:24.777845  581891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	I1028 11:12:24.779354  581891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:12:24.780719  581891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:12:24.782636  581891 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:24.783416  581891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:12:24.808391  581891 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 11:12:24.808492  581891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:12:24.870549  581891 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:72 SystemTime:2024-10-28 11:12:24.858954698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 11:12:24.870670  581891 docker.go:318] overlay module found
	I1028 11:12:24.873792  581891 out.go:177] * Using the docker driver based on existing profile
	I1028 11:12:24.875140  581891 start.go:297] selected driver: docker
	I1028 11:12:24.875160  581891 start.go:901] validating driver "docker" against &{Name:functional-607680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-607680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:12:24.875304  581891 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:12:24.877698  581891 out.go:201] 
	W1028 11:12:24.879640  581891 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 11:12:24.881497  581891 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607680 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
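The dry run fails validation before any resources are touched: the requested 250MiB is checked against the 1800MB usable floor reported above, while the existing profile is configured with 4000MiB. A sketch of that comparison:

package main

import "fmt"

const usableMinMB = 1800 // floor reported by this minikube build in the log above

// validateMemory sketches the kind of gate behind RSRC_INSUFFICIENT_REQ_MEMORY:
// reject any request below the usable minimum before doing anything else.
func validateMemory(requestedMiB int) error {
	if requestedMiB < usableMinMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, usableMinMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, exactly as the dry run above did
	fmt.Println(validateMemory(4000)) // passes; the existing profile uses 4000MiB
}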

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-607680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (174.642274ms)

-- stdout --
	* [functional-607680] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1028 11:12:25.189660  582165 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:12:25.189829  582165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:12:25.189845  582165 out.go:358] Setting ErrFile to fd 2...
	I1028 11:12:25.189852  582165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:12:25.190192  582165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 11:12:25.190770  582165 out.go:352] Setting JSON to false
	I1028 11:12:25.191877  582165 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10489,"bootTime":1730103456,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:12:25.191966  582165 start.go:139] virtualization: kvm guest
	I1028 11:12:25.194465  582165 out.go:177] * [functional-607680] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1028 11:12:25.196252  582165 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:12:25.196318  582165 notify.go:220] Checking for updates...
	I1028 11:12:25.199309  582165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:12:25.201084  582165 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 11:12:25.202698  582165 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	I1028 11:12:25.204320  582165 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:12:25.206287  582165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:12:25.208462  582165 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:25.209089  582165 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:12:25.238855  582165 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 11:12:25.239029  582165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:12:25.295079  582165 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-28 11:12:25.285081081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 11:12:25.295202  582165 docker.go:318] overlay module found
	I1028 11:12:25.297813  582165 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1028 11:12:25.299449  582165 start.go:297] selected driver: docker
	I1028 11:12:25.299472  582165 start.go:901] validating driver "docker" against &{Name:functional-607680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-607680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:12:25.299606  582165 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:12:25.301812  582165 out.go:201] 
	W1028 11:12:25.303074  582165 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 11:12:25.304711  582165 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (8.84s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-607680 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-607680 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9tnnc" [f8a5ae58-ee0f-44dc-b7b8-d8711e56e1a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9tnnc" [f8a5ae58-ee0f-44dc-b7b8-d8711e56e1a1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003707669s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32304
functional_test.go:1675: http://192.168.49.2:32304: success! body:

Hostname: hello-node-connect-67bdd5bbb4-9tnnc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32304
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.84s)
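The final assertion is just an HTTP GET against the NodePort URL that `service ... --url` printed, accepting the echoserver body above as success. A sketch (the URL is the one from this log and is only reachable while the cluster is up):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint copied from this log; it resolves only while the cluster runs.
	resp, err := http.Get("http://192.168.49.2:32304")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}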

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (29.8s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3d056887-6c4b-499a-a6d6-207fb1b82f5d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00375294s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-607680 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-607680 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-607680 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-607680 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5fa212d5-6b39-4ef1-af1b-9404450a214d] Pending
helpers_test.go:344: "sp-pod" [5fa212d5-6b39-4ef1-af1b-9404450a214d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5fa212d5-6b39-4ef1-af1b-9404450a214d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.005110644s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-607680 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-607680 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-607680 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [85d8d94a-285e-49de-8523-a351d70161f6] Pending
helpers_test.go:344: "sp-pod" [85d8d94a-285e-49de-8523-a351d70161f6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [85d8d94a-285e-49de-8523-a351d70161f6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007372471s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-607680 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.80s)
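The interesting assertion above is persistence across pod recreation: a file written through the first sp-pod must still be visible after the pod is deleted and re-created, proving the data lives on the claim rather than in the container filesystem. A sketch of that sequence via kubectl (context name and paths from this log; error handling elided, and the real harness also waits for the replacement pod to reach Running):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a command against the profile's context, as the transcript above does.
func kubectl(args ...string) (string, error) {
	full := append([]string{"--context", "functional-607680"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through pod #1
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the new sp-pod to be Running before this step)
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println("file survived pod recreation:", strings.Contains(out, "foo"))
}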

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh -n functional-607680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cp functional-607680:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4120845829/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh -n functional-607680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh -n functional-607680 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.00s)

TestFunctional/parallel/MySQL (23.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-607680 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-cz9m5" [e42b5462-094f-491c-add5-baaa3e4f8003] Pending
helpers_test.go:344: "mysql-6cdb49bbb-cz9m5" [e42b5462-094f-491c-add5-baaa3e4f8003] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-cz9m5" [e42b5462-094f-491c-add5-baaa3e4f8003] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003711508s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-607680 exec mysql-6cdb49bbb-cz9m5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-607680 exec mysql-6cdb49bbb-cz9m5 -- mysql -ppassword -e "show databases;": exit status 1 (107.636814ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1028 11:12:20.605745  541347 retry.go:31] will retry after 1.383638823s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-607680 exec mysql-6cdb49bbb-cz9m5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-607680 exec mysql-6cdb49bbb-cz9m5 -- mysql -ppassword -e "show databases;": exit status 1 (110.997651ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1028 11:12:22.101190  541347 retry.go:31] will retry after 2.094801861s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-607680 exec mysql-6cdb49bbb-cz9m5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.06s)
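The two ERROR 2002 failures are expected: the pod reports Running before mysqld finishes initializing, so the harness retries with a growing delay (1.38s, then 2.09s above) until `show databases;` succeeds. A sketch of that retry shape, with a simulated slow-starting server standing in for mysql:

package main

import (
	"fmt"
	"time"
)

// retry runs fn until it succeeds or attempts run out, sleeping a growing
// delay between tries, much like the will-retry lines in the log above.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, roughly like the intervals above
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, time.Second, func() error {
		calls++
		if calls < 3 { // simulate mysqld still starting for the first two probes
			return fmt.Errorf("ERROR 2002 (HY000): can't connect yet")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "attempts")
}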

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/541347/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo cat /etc/test/nested/copy/541347/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (1.95s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/541347.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo cat /etc/ssl/certs/541347.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/541347.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo cat /usr/share/ca-certificates/541347.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5413472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo cat /etc/ssl/certs/5413472.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5413472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo cat /usr/share/ca-certificates/5413472.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.95s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-607680 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh "sudo systemctl is-active docker": exit status 1 (269.396086ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh "sudo systemctl is-active containerd": exit status 1 (259.361972ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
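With crio as the active runtime, `systemctl is-active docker` and `... containerd` must both print "inactive" and exit non-zero (status 3 above), so the test requires the failure as well as the output. A sketch of that two-sided check (it runs systemctl locally; the test runs it inside the node via ssh):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled mirrors the assertion above: is-active must both fail
// (exit 3 for an inactive unit) and print "inactive".
func runtimeDisabled(unit string) bool {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	return err != nil && strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s disabled: %v\n", unit, runtimeDisabled(unit))
	}
}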

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-607680 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-607680 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jgjlv" [0b57b4c3-9553-4d1c-9b18-8771a20816ea] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jgjlv" [0b57b4c3-9553-4d1c-9b18-8771a20816ea] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004334541s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-607680 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-607680 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-607680 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 578518: os: process already finished
helpers_test.go:508: unable to kill pid 578206: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-607680 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-607680 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-607680 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [368acfdb-21fe-4f62-ab8b-2200e4a936d5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [368acfdb-21fe-4f62-ab8b-2200e4a936d5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.004364062s
I1028 11:12:21.865380  541347 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.23s)

TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 service list -o json
functional_test.go:1494: Took "598.268655ms" to run "out/minikube-linux-amd64 -p functional-607680 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30110
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30110
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-607680 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.10.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-607680 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/MountCmd/any-port (7.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdany-port86821772/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730113942957597489" to /tmp/TestFunctionalparallelMountCmdany-port86821772/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730113942957597489" to /tmp/TestFunctionalparallelMountCmdany-port86821772/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730113942957597489" to /tmp/TestFunctionalparallelMountCmdany-port86821772/001/test-1730113942957597489
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.270404ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1028 11:12:23.252188  541347 retry.go:31] will retry after 479.519831ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 28 11:12 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 28 11:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 28 11:12 test-1730113942957597489
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh cat /mount-9p/test-1730113942957597489
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-607680 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [720df84e-6d39-4beb-bd13-6cafe963900d] Pending
helpers_test.go:344: "busybox-mount" [720df84e-6d39-4beb-bd13-6cafe963900d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [720df84e-6d39-4beb-bd13-6cafe963900d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [720df84e-6d39-4beb-bd13-6cafe963900d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004652216s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-607680 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdany-port86821772/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "312.222353ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.47607ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "322.62471ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "63.771337ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607680 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-607680
localhost/kicbase/echo-server:functional-607680
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607680 image ls --format short --alsologtostderr:
I1028 11:12:33.380390  585057 out.go:345] Setting OutFile to fd 1 ...
I1028 11:12:33.380902  585057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.380916  585057 out.go:358] Setting ErrFile to fd 2...
I1028 11:12:33.380922  585057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.381213  585057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
I1028 11:12:33.382070  585057 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.382224  585057 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.382805  585057 cli_runner.go:164] Run: docker container inspect functional-607680 --format={{.State.Status}}
I1028 11:12:33.404822  585057 ssh_runner.go:195] Run: systemctl --version
I1028 11:12:33.404901  585057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-607680
I1028 11:12:33.428791  585057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/functional-607680/id_rsa Username:docker}
I1028 11:12:33.521568  585057 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607680 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| localhost/minikube-local-cache-test     | functional-607680  | 5f4838ac645f6 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| docker.io/library/nginx                 | alpine             | cb8f91112b6b5 | 48.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-607680  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607680 image ls --format table --alsologtostderr:
I1028 11:12:33.908166  585482 out.go:345] Setting OutFile to fd 1 ...
I1028 11:12:33.908271  585482 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.908280  585482 out.go:358] Setting ErrFile to fd 2...
I1028 11:12:33.908284  585482 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.908559  585482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
I1028 11:12:33.909216  585482 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.909316  585482 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.909696  585482 cli_runner.go:164] Run: docker container inspect functional-607680 --format={{.State.Status}}
I1028 11:12:33.929847  585482 ssh_runner.go:195] Run: systemctl --version
I1028 11:12:33.929909  585482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-607680
I1028 11:12:33.951673  585482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/functional-607680/id_rsa Username:docker}
I1028 11:12:34.041079  585482 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607680 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f4838ac645f6c84004f4bfe9719ca450cb208c2fb7095b49cab53f69bb9df95","repoDigests":["localhost/minikube-local-cache-test@sha256:385682b87bee429e1ebb9fbfed56eb45784a1aaa6b0b69c1f46e2475027d64df"],"repoTags":["localhost/minikube-local-cache-test:functional-607680"],"size":"3330"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-607680"],"size":"4943877"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48414943"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607680 image ls --format json --alsologtostderr:
I1028 11:12:33.673678  585281 out.go:345] Setting OutFile to fd 1 ...
I1028 11:12:33.673906  585281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.673918  585281 out.go:358] Setting ErrFile to fd 2...
I1028 11:12:33.673922  585281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.674106  585281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
I1028 11:12:33.674762  585281 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.674865  585281 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.675483  585281 cli_runner.go:164] Run: docker container inspect functional-607680 --format={{.State.Status}}
I1028 11:12:33.698935  585281 ssh_runner.go:195] Run: systemctl --version
I1028 11:12:33.699020  585281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-607680
I1028 11:12:33.719655  585281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/functional-607680/id_rsa Username:docker}
I1028 11:12:33.813528  585281 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607680 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f4838ac645f6c84004f4bfe9719ca450cb208c2fb7095b49cab53f69bb9df95
repoDigests:
- localhost/minikube-local-cache-test@sha256:385682b87bee429e1ebb9fbfed56eb45784a1aaa6b0b69c1f46e2475027d64df
repoTags:
- localhost/minikube-local-cache-test:functional-607680
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3
repoTags:
- docker.io/library/nginx:alpine
size: "48414943"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-607680
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607680 image ls --format yaml --alsologtostderr:
I1028 11:12:33.430535  585098 out.go:345] Setting OutFile to fd 1 ...
I1028 11:12:33.430821  585098 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.430831  585098 out.go:358] Setting ErrFile to fd 2...
I1028 11:12:33.430835  585098 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.431087  585098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
I1028 11:12:33.431843  585098 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.431959  585098 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.432392  585098 cli_runner.go:164] Run: docker container inspect functional-607680 --format={{.State.Status}}
I1028 11:12:33.452398  585098 ssh_runner.go:195] Run: systemctl --version
I1028 11:12:33.452465  585098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-607680
I1028 11:12:33.470910  585098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/functional-607680/id_rsa Username:docker}
I1028 11:12:33.557355  585098 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh pgrep buildkitd: exit status 1 (280.610499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image build -t localhost/my-image:functional-607680 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-607680 image build -t localhost/my-image:functional-607680 testdata/build --alsologtostderr: (2.124505713s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607680 image build -t localhost/my-image:functional-607680 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 48c601cd14c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-607680
--> 67da75c541a
Successfully tagged localhost/my-image:functional-607680
67da75c541a2cfb9286a5a6690380f5c323c131212c2b7bdbede32add241e50a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607680 image build -t localhost/my-image:functional-607680 testdata/build --alsologtostderr:
I1028 11:12:33.902611  585481 out.go:345] Setting OutFile to fd 1 ...
I1028 11:12:33.902914  585481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.902927  585481 out.go:358] Setting ErrFile to fd 2...
I1028 11:12:33.902932  585481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:12:33.903551  585481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
I1028 11:12:33.904958  585481 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.905906  585481 config.go:182] Loaded profile config "functional-607680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:12:33.906719  585481 cli_runner.go:164] Run: docker container inspect functional-607680 --format={{.State.Status}}
I1028 11:12:33.925728  585481 ssh_runner.go:195] Run: systemctl --version
I1028 11:12:33.925796  585481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-607680
I1028 11:12:33.946378  585481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/functional-607680/id_rsa Username:docker}
I1028 11:12:34.033921  585481 build_images.go:161] Building image from path: /tmp/build.456974432.tar
I1028 11:12:34.034001  585481 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 11:12:34.044223  585481 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.456974432.tar
I1028 11:12:34.047996  585481 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.456974432.tar: stat -c "%s %y" /var/lib/minikube/build/build.456974432.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.456974432.tar': No such file or directory
I1028 11:12:34.048042  585481 ssh_runner.go:362] scp /tmp/build.456974432.tar --> /var/lib/minikube/build/build.456974432.tar (3072 bytes)
I1028 11:12:34.075889  585481 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.456974432
I1028 11:12:34.084678  585481 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.456974432 -xf /var/lib/minikube/build/build.456974432.tar
I1028 11:12:34.094240  585481 crio.go:315] Building image: /var/lib/minikube/build/build.456974432
I1028 11:12:34.094309  585481 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-607680 /var/lib/minikube/build/build.456974432 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1028 11:12:35.948137  585481 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-607680 /var/lib/minikube/build/build.456974432 --cgroup-manager=cgroupfs: (1.853796666s)
I1028 11:12:35.948236  585481 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.456974432
I1028 11:12:35.956885  585481 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.456974432.tar
I1028 11:12:35.965197  585481 build_images.go:217] Built localhost/my-image:functional-607680 from /tmp/build.456974432.tar
I1028 11:12:35.965239  585481 build_images.go:133] succeeded building to: functional-607680
I1028 11:12:35.965245  585481 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.62s)

TestFunctional/parallel/ImageCommands/Setup (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-607680
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image load --daemon kicbase/echo-server:functional-607680 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-607680 image load --daemon kicbase/echo-server:functional-607680 --alsologtostderr: (1.304287717s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image load --daemon kicbase/echo-server:functional-607680 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-607680
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image load --daemon kicbase/echo-server:functional-607680 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image save kicbase/echo-server:functional-607680 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image rm kicbase/echo-server:functional-607680 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-607680 image rm kicbase/echo-server:functional-607680 --alsologtostderr: (1.985784065s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.21s)

TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdspecific-port2949562462/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.880566ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1028 11:12:31.026170  541347 retry.go:31] will retry after 705.525707ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdspecific-port2949562462/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh "sudo umount -f /mount-9p": exit status 1 (274.997345ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-607680 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdspecific-port2949562462/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-607680
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 image save --daemon kicbase/echo-server:functional-607680 --alsologtostderr
2024/10/28 11:12:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-607680
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup599265784/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup599265784/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup599265784/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T" /mount1: exit status 1 (428.035041ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1028 11:12:33.241211  541347 retry.go:31] will retry after 610.378483ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607680 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-607680 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup599265784/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup599265784/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup599265784/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-607680
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-607680
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-607680
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (153.21s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-159076 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1028 11:12:42.329173  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:42.335713  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:42.347294  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:42.368791  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:42.410333  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:42.491815  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:42.653214  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:42.974944  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:43.617030  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:44.898673  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:47.460594  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:12:52.582381  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:13:02.824135  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:13:23.306004  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:14:04.268343  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-159076 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m32.522901187s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (153.21s)
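
The harness is driving a plain CLI sequence here; to reproduce the HA start by hand, the same two commands can be run directly (flags copied from the log above; out/minikube-linux-amd64 is the locally built binary):

    # start a cluster with multiple control-plane nodes on the docker driver with cri-o
    out/minikube-linux-amd64 start -p ha-159076 --wait=true --memory=2200 --ha \
      -v=7 --alsologtostderr --driver=docker --container-runtime=crio
    # confirm every node reports host/kubelet/apiserver Running
    out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr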

TestMultiControlPlane/serial/DeployApp (8.39s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-159076 -- rollout status deployment/busybox: (6.371635462s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-h7dd6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-qd67f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-tsgvg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-h7dd6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-qd67f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-tsgvg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-h7dd6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-qd67f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-tsgvg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.39s)
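
The DeployApp steps collapse into an apply-and-probe loop; a minimal sketch using plain kubectl against the profile's context instead of the minikube kubectl wrapper (assumes only the busybox replicas live in the default namespace):

    kubectl --context ha-159076 apply -f ./testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-159076 rollout status deployment/busybox
    # resolve an external and an in-cluster name from every replica
    for pod in $(kubectl --context ha-159076 get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-159076 exec "$pod" -- nslookup kubernetes.io
      kubectl --context ha-159076 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done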

TestMultiControlPlane/serial/PingHostFromPods (1.11s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-h7dd6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-h7dd6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-qd67f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-qd67f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-tsgvg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-159076 -- exec busybox-7dff88458-tsgvg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)
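
The host probe works by resolving host.minikube.internal inside the pod and pinging whatever address comes back; the awk 'NR==5' / cut pipeline simply plucks the IP out of the fixed layout of busybox's nslookup output. Sketched for one pod (the pod name is taken from this particular run):

    HOST_IP=$(kubectl --context ha-159076 exec busybox-7dff88458-h7dd6 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-159076 exec busybox-7dff88458-h7dd6 -- ping -c 1 "$HOST_IP"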

TestMultiControlPlane/serial/AddWorkerNode (33.86s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-159076 -v=7 --alsologtostderr
E1028 11:15:26.190036  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-159076 -v=7 --alsologtostderr: (33.022519759s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.86s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-159076 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (15.97s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp testdata/cp-test.txt ha-159076:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037018542/001/cp-test_ha-159076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076:/home/docker/cp-test.txt ha-159076-m02:/home/docker/cp-test_ha-159076_ha-159076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test_ha-159076_ha-159076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076:/home/docker/cp-test.txt ha-159076-m03:/home/docker/cp-test_ha-159076_ha-159076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test_ha-159076_ha-159076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076:/home/docker/cp-test.txt ha-159076-m04:/home/docker/cp-test_ha-159076_ha-159076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test_ha-159076_ha-159076-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp testdata/cp-test.txt ha-159076-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037018542/001/cp-test_ha-159076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m02:/home/docker/cp-test.txt ha-159076:/home/docker/cp-test_ha-159076-m02_ha-159076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test_ha-159076-m02_ha-159076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m02:/home/docker/cp-test.txt ha-159076-m03:/home/docker/cp-test_ha-159076-m02_ha-159076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test_ha-159076-m02_ha-159076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m02:/home/docker/cp-test.txt ha-159076-m04:/home/docker/cp-test_ha-159076-m02_ha-159076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test_ha-159076-m02_ha-159076-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp testdata/cp-test.txt ha-159076-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037018542/001/cp-test_ha-159076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m03:/home/docker/cp-test.txt ha-159076:/home/docker/cp-test_ha-159076-m03_ha-159076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test_ha-159076-m03_ha-159076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m03:/home/docker/cp-test.txt ha-159076-m02:/home/docker/cp-test_ha-159076-m03_ha-159076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test_ha-159076-m03_ha-159076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m03:/home/docker/cp-test.txt ha-159076-m04:/home/docker/cp-test_ha-159076-m03_ha-159076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test_ha-159076-m03_ha-159076-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp testdata/cp-test.txt ha-159076-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037018542/001/cp-test_ha-159076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m04:/home/docker/cp-test.txt ha-159076:/home/docker/cp-test_ha-159076-m04_ha-159076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076 "sudo cat /home/docker/cp-test_ha-159076-m04_ha-159076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m04:/home/docker/cp-test.txt ha-159076-m02:/home/docker/cp-test_ha-159076-m04_ha-159076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test_ha-159076-m04_ha-159076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m04:/home/docker/cp-test.txt ha-159076-m03:/home/docker/cp-test_ha-159076-m04_ha-159076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m03 "sudo cat /home/docker/cp-test_ha-159076-m04_ha-159076-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.97s)
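
Every pair above is the same three-command pattern, repeated for each source/destination combination and verified with sudo cat over ssh; one instance sketched:

    # push a local file to a node, read it back, then cross-copy node-to-node
    out/minikube-linux-amd64 -p ha-159076 cp testdata/cp-test.txt ha-159076-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-159076 ssh -n ha-159076-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-159076 cp ha-159076-m02:/home/docker/cp-test.txt \
      ha-159076-m03:/home/docker/cp-test_ha-159076-m02_ha-159076-m03.txt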

TestMultiControlPlane/serial/StopSecondaryNode (12.54s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-159076 node stop m02 -v=7 --alsologtostderr: (11.879745632s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr: exit status 7 (656.084098ms)

-- stdout --
	ha-159076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-159076-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-159076-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-159076-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1028 11:16:27.054681  606785 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:16:27.054858  606785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:16:27.054870  606785 out.go:358] Setting ErrFile to fd 2...
	I1028 11:16:27.054876  606785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:16:27.055076  606785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 11:16:27.055270  606785 out.go:352] Setting JSON to false
	I1028 11:16:27.055308  606785 mustload.go:65] Loading cluster: ha-159076
	I1028 11:16:27.055447  606785 notify.go:220] Checking for updates...
	I1028 11:16:27.055799  606785 config.go:182] Loaded profile config "ha-159076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:16:27.055826  606785 status.go:174] checking status of ha-159076 ...
	I1028 11:16:27.056311  606785 cli_runner.go:164] Run: docker container inspect ha-159076 --format={{.State.Status}}
	I1028 11:16:27.073600  606785 status.go:371] ha-159076 host status = "Running" (err=<nil>)
	I1028 11:16:27.073630  606785 host.go:66] Checking if "ha-159076" exists ...
	I1028 11:16:27.073979  606785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159076
	I1028 11:16:27.093675  606785 host.go:66] Checking if "ha-159076" exists ...
	I1028 11:16:27.093955  606785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:16:27.094023  606785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159076
	I1028 11:16:27.113237  606785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/ha-159076/id_rsa Username:docker}
	I1028 11:16:27.202222  606785 ssh_runner.go:195] Run: systemctl --version
	I1028 11:16:27.206356  606785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:16:27.217265  606785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:16:27.267098  606785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-10-28 11:16:27.257389035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 11:16:27.267735  606785 kubeconfig.go:125] found "ha-159076" server: "https://192.168.49.254:8443"
	I1028 11:16:27.267766  606785 api_server.go:166] Checking apiserver status ...
	I1028 11:16:27.267803  606785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:16:27.280448  606785 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1484/cgroup
	I1028 11:16:27.290448  606785 api_server.go:182] apiserver freezer: "8:freezer:/docker/97a2c02dacd129275da54da8354850c5a0ef0cc289a221592d59925ec4ac5138/crio/crio-228f3a83e8bd48faf65a4dc4e644874d8c56514d2f49166e65971ec384ceb279"
	I1028 11:16:27.290521  606785 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/97a2c02dacd129275da54da8354850c5a0ef0cc289a221592d59925ec4ac5138/crio/crio-228f3a83e8bd48faf65a4dc4e644874d8c56514d2f49166e65971ec384ceb279/freezer.state
	I1028 11:16:27.298831  606785 api_server.go:204] freezer state: "THAWED"
	I1028 11:16:27.298872  606785 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1028 11:16:27.302745  606785 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1028 11:16:27.302771  606785 status.go:463] ha-159076 apiserver status = Running (err=<nil>)
	I1028 11:16:27.302782  606785 status.go:176] ha-159076 status: &{Name:ha-159076 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:16:27.302801  606785 status.go:174] checking status of ha-159076-m02 ...
	I1028 11:16:27.303057  606785 cli_runner.go:164] Run: docker container inspect ha-159076-m02 --format={{.State.Status}}
	I1028 11:16:27.322232  606785 status.go:371] ha-159076-m02 host status = "Stopped" (err=<nil>)
	I1028 11:16:27.322284  606785 status.go:384] host is not running, skipping remaining checks
	I1028 11:16:27.322298  606785 status.go:176] ha-159076-m02 status: &{Name:ha-159076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:16:27.322347  606785 status.go:174] checking status of ha-159076-m03 ...
	I1028 11:16:27.322755  606785 cli_runner.go:164] Run: docker container inspect ha-159076-m03 --format={{.State.Status}}
	I1028 11:16:27.339355  606785 status.go:371] ha-159076-m03 host status = "Running" (err=<nil>)
	I1028 11:16:27.339381  606785 host.go:66] Checking if "ha-159076-m03" exists ...
	I1028 11:16:27.339685  606785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159076-m03
	I1028 11:16:27.355720  606785 host.go:66] Checking if "ha-159076-m03" exists ...
	I1028 11:16:27.356043  606785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:16:27.356108  606785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159076-m03
	I1028 11:16:27.372859  606785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/ha-159076-m03/id_rsa Username:docker}
	I1028 11:16:27.462248  606785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:16:27.473616  606785 kubeconfig.go:125] found "ha-159076" server: "https://192.168.49.254:8443"
	I1028 11:16:27.473649  606785 api_server.go:166] Checking apiserver status ...
	I1028 11:16:27.473696  606785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:16:27.484229  606785 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	I1028 11:16:27.493640  606785 api_server.go:182] apiserver freezer: "8:freezer:/docker/87466b09a23f59594e382ea3c3baf9201e20044cc0f7f903b8eb898803ea3aab/crio/crio-bb78a5d41545cae1e9aad321d0941cf3015a26a5259d2e50356a38998bff42dc"
	I1028 11:16:27.493706  606785 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/87466b09a23f59594e382ea3c3baf9201e20044cc0f7f903b8eb898803ea3aab/crio/crio-bb78a5d41545cae1e9aad321d0941cf3015a26a5259d2e50356a38998bff42dc/freezer.state
	I1028 11:16:27.501984  606785 api_server.go:204] freezer state: "THAWED"
	I1028 11:16:27.502023  606785 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1028 11:16:27.505949  606785 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1028 11:16:27.505975  606785 status.go:463] ha-159076-m03 apiserver status = Running (err=<nil>)
	I1028 11:16:27.505984  606785 status.go:176] ha-159076-m03 status: &{Name:ha-159076-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:16:27.505999  606785 status.go:174] checking status of ha-159076-m04 ...
	I1028 11:16:27.506280  606785 cli_runner.go:164] Run: docker container inspect ha-159076-m04 --format={{.State.Status}}
	I1028 11:16:27.523577  606785 status.go:371] ha-159076-m04 host status = "Running" (err=<nil>)
	I1028 11:16:27.523606  606785 host.go:66] Checking if "ha-159076-m04" exists ...
	I1028 11:16:27.523893  606785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159076-m04
	I1028 11:16:27.542097  606785 host.go:66] Checking if "ha-159076-m04" exists ...
	I1028 11:16:27.542377  606785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:16:27.542416  606785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159076-m04
	I1028 11:16:27.561435  606785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/ha-159076-m04/id_rsa Username:docker}
	I1028 11:16:27.646100  606785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:16:27.657220  606785 status.go:176] ha-159076-m04 status: &{Name:ha-159076-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.54s)
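
Note that status deliberately exits 7 (not 0) while any node is stopped, which the test treats as the expected outcome; a script wrapping the same check has to tolerate the non-zero exit, for example:

    out/minikube-linux-amd64 -p ha-159076 node stop m02 -v=7 --alsologtostderr
    # status exits non-zero while m02 is down, so capture the code instead of aborting
    out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr || echo "status exit code: $?"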

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-159076 node start m02 -v=7 --alsologtostderr: (18.703188832s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr: (1.052010612s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (218.49s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-159076 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-159076 -v=7 --alsologtostderr
E1028 11:17:01.496940  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:01.503367  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:01.514840  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:01.536276  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:01.577746  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:01.659241  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:01.820820  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:02.143128  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:02.784544  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:04.065954  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:06.627987  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:11.749589  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:21.991025  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-159076 -v=7 --alsologtostderr: (36.688678468s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-159076 --wait=true -v=7 --alsologtostderr
E1028 11:17:42.328949  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:42.473397  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:18:10.031698  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:18:23.435148  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:19:45.356808  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-159076 --wait=true -v=7 --alsologtostderr: (3m1.691370052s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-159076
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (218.49s)
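
The restart check is a stop/start cycle bracketed by node-list comparisons; the same sequence by hand:

    out/minikube-linux-amd64 node list -p ha-159076 -v=7 --alsologtostderr   # record the node set
    out/minikube-linux-amd64 stop -p ha-159076 -v=7 --alsologtostderr
    out/minikube-linux-amd64 start -p ha-159076 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-amd64 node list -p ha-159076                          # expect the same node set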

TestMultiControlPlane/serial/DeleteSecondaryNode (11.43s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-159076 node delete m03 -v=7 --alsologtostderr: (10.66115171s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.43s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (35.63s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-159076 stop -v=7 --alsologtostderr: (35.52214614s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr: exit status 7 (104.064195ms)

-- stdout --
	ha-159076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-159076-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-159076-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 11:21:15.162817  625525 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:21:15.163075  625525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:21:15.163085  625525 out.go:358] Setting ErrFile to fd 2...
	I1028 11:21:15.163089  625525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:21:15.163296  625525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 11:21:15.163458  625525 out.go:352] Setting JSON to false
	I1028 11:21:15.163488  625525 mustload.go:65] Loading cluster: ha-159076
	I1028 11:21:15.163624  625525 notify.go:220] Checking for updates...
	I1028 11:21:15.163934  625525 config.go:182] Loaded profile config "ha-159076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:21:15.163959  625525 status.go:174] checking status of ha-159076 ...
	I1028 11:21:15.164411  625525 cli_runner.go:164] Run: docker container inspect ha-159076 --format={{.State.Status}}
	I1028 11:21:15.182119  625525 status.go:371] ha-159076 host status = "Stopped" (err=<nil>)
	I1028 11:21:15.182157  625525 status.go:384] host is not running, skipping remaining checks
	I1028 11:21:15.182166  625525 status.go:176] ha-159076 status: &{Name:ha-159076 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:21:15.182194  625525 status.go:174] checking status of ha-159076-m02 ...
	I1028 11:21:15.182455  625525 cli_runner.go:164] Run: docker container inspect ha-159076-m02 --format={{.State.Status}}
	I1028 11:21:15.199338  625525 status.go:371] ha-159076-m02 host status = "Stopped" (err=<nil>)
	I1028 11:21:15.199364  625525 status.go:384] host is not running, skipping remaining checks
	I1028 11:21:15.199373  625525 status.go:176] ha-159076-m02 status: &{Name:ha-159076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:21:15.199401  625525 status.go:174] checking status of ha-159076-m04 ...
	I1028 11:21:15.199661  625525 cli_runner.go:164] Run: docker container inspect ha-159076-m04 --format={{.State.Status}}
	I1028 11:21:15.215898  625525 status.go:371] ha-159076-m04 host status = "Stopped" (err=<nil>)
	I1028 11:21:15.215929  625525 status.go:384] host is not running, skipping remaining checks
	I1028 11:21:15.215938  625525 status.go:176] ha-159076-m04 status: &{Name:ha-159076-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.63s)

TestMultiControlPlane/serial/RestartCluster (58.47s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-159076 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1028 11:22:01.494626  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-159076 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (57.676267643s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.47s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (64.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-159076 --control-plane -v=7 --alsologtostderr
E1028 11:22:29.198568  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:22:42.328987  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-159076 --control-plane -v=7 --alsologtostderr: (1m3.508344385s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (64.36s)
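
Growing the control plane reuses the node add command from the worker-node test earlier, with --control-plane added:

    out/minikube-linux-amd64 node add -p ha-159076 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-159076 status -v=7 --alsologtostderr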

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestJSONOutput/start/Command (38.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-574987 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-574987 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (38.206409936s)
--- PASS: TestJSONOutput/start/Command (38.21s)
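
With --output=json, minikube emits one CloudEvents-style JSON object per line (the shape is visible in the TestErrorJSONOutput stdout further down); a hedged sketch of filtering the step events with jq, assuming jq is installed:

    out/minikube-linux-amd64 start -p json-output-574987 --output=json --user=testUser \
      --memory=2200 --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data | "\(.currentstep)/\(.totalsteps) \(.message)"'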

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-574987 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-574987 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-574987 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-574987 --output=json --user=testUser: (5.752836152s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-348012 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-348012 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.197268ms)

-- stdout --
	{"specversion":"1.0","id":"437f77e7-f42b-431d-a187-299140e13f96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-348012] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a020c0d-c458-442a-b701-3de07638adbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"54b9f0cb-36fc-409f-956a-d35f74c01aea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"300069d3-f19a-47e6-b31c-525ff0c9b464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig"}}
	{"specversion":"1.0","id":"d050228f-7c4b-4d6c-b952-e92b57c3b626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube"}}
	{"specversion":"1.0","id":"695054e4-c9a9-4978-af08-81c976d6d812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a5d0bd4f-fb04-4d12-8f8c-80ca82dd7b5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a23aa3bb-ac91-43fe-8165-7c1b6cb7398d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-348012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-348012
--- PASS: TestErrorJSONOutput (0.22s)
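
The failure path emits the same event stream, ending in a single io.k8s.sigs.minikube.error event carrying the error name, message, and exit code; a hedged jq one-liner for pulling it out of the stream shown above (jq assumed available):

    out/minikube-linux-amd64 start -p json-output-error-348012 --memory=2200 \
      --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'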

TestKicCustomNetwork/create_custom_network (30.15s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-284005 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-284005 --network=: (28.076476551s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-284005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-284005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-284005: (2.058146861s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.15s)
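
The custom-network path hands --network through to docker and then checks docker network ls for the result; this run passed an empty --network=, but an explicit name can be supplied instead (my-net below is a hypothetical value):

    out/minikube-linux-amd64 start -p docker-network-284005 --network=my-net  # minikube should create my-net if absent
    docker network ls --format '{{.Name}}'                                    # my-net should be listed
    out/minikube-linux-amd64 delete -p docker-network-284005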

TestKicCustomNetwork/use_default_bridge_network (23.18s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-514863 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-514863 --network=bridge: (21.304144888s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-514863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-514863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-514863: (1.857508487s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.18s)

TestKicExistingNetwork (25.99s)

=== RUN   TestKicExistingNetwork
I1028 11:25:10.688777  541347 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1028 11:25:10.705340  541347 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1028 11:25:10.705436  541347 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1028 11:25:10.705464  541347 cli_runner.go:164] Run: docker network inspect existing-network
W1028 11:25:10.722188  541347 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1028 11:25:10.722238  541347 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1028 11:25:10.722256  541347 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1028 11:25:10.722381  541347 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 11:25:10.740082  541347 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ca21edaae91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5a:a6:34:a0} reservation:<nil>}
I1028 11:25:10.740555  541347 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000472330}
I1028 11:25:10.740582  541347 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1028 11:25:10.740625  541347 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1028 11:25:10.801803  541347 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-832720 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-832720 --network=existing-network: (23.891836169s)
helpers_test.go:175: Cleaning up "existing-network-832720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-832720
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-832720: (1.952042781s)
I1028 11:25:36.662465  541347 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.99s)
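
As the log shows, this test pre-creates the docker network itself and then points minikube at it. Reproduced by hand, with the create options taken from the logged command and an illustrative profile name:

    docker network create --driver=bridge --subnet=192.168.58.0/24 existing-network
    minikube start -p net-demo --network=existing-network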

TestKicCustomSubnet (23.31s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-931462 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-931462 --subnet=192.168.60.0/24: (21.231722188s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-931462 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-931462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-931462
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-931462: (2.06070049s)
--- PASS: TestKicCustomSubnet (23.31s)
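
--subnet pins the CIDR of the network the docker driver creates; the inspect command from the log reads it back. By hand, with an illustrative profile name:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected output: 192.168.60.0/24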

TestKicStaticIP (26.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-698427 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-698427 --static-ip=192.168.200.200: (24.595194725s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-698427 ip
helpers_test.go:175: Cleaning up "static-ip-698427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-698427
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-698427: (2.036945618s)
--- PASS: TestKicStaticIP (26.76s)
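
--static-ip fixes the node address instead of letting minikube pick a free subnet, and minikube ip reads it back. By hand, with an illustrative profile name:

    minikube start -p ip-demo --static-ip=192.168.200.200
    minikube -p ip-demo ip   # expected output: 192.168.200.200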

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (50.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-677523 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-677523 --driver=docker  --container-runtime=crio: (20.682158688s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-691269 --driver=docker  --container-runtime=crio
E1028 11:27:01.495302  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-691269 --driver=docker  --container-runtime=crio: (24.889374276s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-677523
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-691269
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-691269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-691269
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-691269: (1.860118572s)
helpers_test.go:175: Cleaning up "first-677523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-677523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-677523: (2.23599261s)
--- PASS: TestMinikubeProfile (50.84s)
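
The same multi-profile flow by hand, following the logged commands (profile names illustrative): each start creates an independent cluster, minikube profile selects the active one, and profile list -ojson emits machine-readable state:

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first         # make 'first' the active profile
    minikube profile list -ojson   # inspect both profiles as JSON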

TestMountStart/serial/StartWithMountFirst (8.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-556062 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-556062 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.488467901s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.49s)
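
The flags above spell out the 9p mount's uid/gid, msize, and port, and the later Verify* steps simply list the mount target inside the guest. A hand-run sketch (profile name illustrative; /minikube-host is the path the test checks):

    minikube start -p mount-demo --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host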

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-556062 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-570548 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-570548 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.58788708s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.59s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-570548 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-556062 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-556062 --alsologtostderr -v=5: (1.599343519s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-570548 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-570548
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-570548: (1.184458237s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.18s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-570548
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-570548: (6.181651852s)
E1028 11:27:42.328848  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/RestartStopped (7.18s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-570548 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (67.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650956 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650956 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.860568843s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.32s)
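
--nodes=2 brings up a control plane plus one worker in a single start, and status then reports each machine separately. By hand, with an illustrative profile name:

    minikube start -p multi-demo --nodes=2 --memory=2200 \
      --driver=docker --container-runtime=crio
    minikube -p multi-demo status   # one status block per node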

TestMultiNode/serial/DeployApp2Nodes (5.86s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-650956 -- rollout status deployment/busybox: (3.390637557s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-g66n5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-g66n5 -- nslookup kubernetes.io: (1.204744252s)
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-m8h7f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-g66n5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-m8h7f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-g66n5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-m8h7f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.86s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-g66n5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-g66n5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-m8h7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650956 -- exec busybox-7dff88458-m8h7f -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
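
The pipeline above extracts the address that host.minikube.internal resolves to (nslookup's fifth output line in busybox), and the ping confirms the pod can reach the host gateway. By hand, with an illustrative pod name:

    kubectl exec busybox-pod -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec busybox-pod -- sh -c "ping -c 1 192.168.67.1"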

TestMultiNode/serial/AddNode (24.86s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-650956 -v 3 --alsologtostderr
E1028 11:29:05.393920  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-650956 -v 3 --alsologtostderr: (24.260292605s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.86s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-650956 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (9.09s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp testdata/cp-test.txt multinode-650956:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1570788519/001/cp-test_multinode-650956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956:/home/docker/cp-test.txt multinode-650956-m02:/home/docker/cp-test_multinode-650956_multinode-650956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m02 "sudo cat /home/docker/cp-test_multinode-650956_multinode-650956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956:/home/docker/cp-test.txt multinode-650956-m03:/home/docker/cp-test_multinode-650956_multinode-650956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m03 "sudo cat /home/docker/cp-test_multinode-650956_multinode-650956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp testdata/cp-test.txt multinode-650956-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1570788519/001/cp-test_multinode-650956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956-m02:/home/docker/cp-test.txt multinode-650956:/home/docker/cp-test_multinode-650956-m02_multinode-650956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956 "sudo cat /home/docker/cp-test_multinode-650956-m02_multinode-650956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956-m02:/home/docker/cp-test.txt multinode-650956-m03:/home/docker/cp-test_multinode-650956-m02_multinode-650956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m03 "sudo cat /home/docker/cp-test_multinode-650956-m02_multinode-650956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp testdata/cp-test.txt multinode-650956-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1570788519/001/cp-test_multinode-650956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956-m03:/home/docker/cp-test.txt multinode-650956:/home/docker/cp-test_multinode-650956-m03_multinode-650956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956 "sudo cat /home/docker/cp-test_multinode-650956-m03_multinode-650956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 cp multinode-650956-m03:/home/docker/cp-test.txt multinode-650956-m02:/home/docker/cp-test_multinode-650956-m03_multinode-650956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 ssh -n multinode-650956-m02 "sudo cat /home/docker/cp-test_multinode-650956-m03_multinode-650956-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.09s)
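
minikube cp covers host-to-node, node-to-host, and node-to-node transfers, which is exactly the matrix the test walks. Condensed from the logged commands (profile name shortened for illustration):

    minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt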

TestMultiNode/serial/StopNode (2.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-650956 node stop m03: (1.184208271s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650956 status: exit status 7 (459.221546ms)

-- stdout --
	multinode-650956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-650956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-650956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr: exit status 7 (462.449021ms)

-- stdout --
	multinode-650956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-650956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-650956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 11:29:34.760293  690327 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:29:34.760406  690327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:29:34.760414  690327 out.go:358] Setting ErrFile to fd 2...
	I1028 11:29:34.760418  690327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:29:34.760627  690327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 11:29:34.760826  690327 out.go:352] Setting JSON to false
	I1028 11:29:34.760861  690327 mustload.go:65] Loading cluster: multinode-650956
	I1028 11:29:34.760913  690327 notify.go:220] Checking for updates...
	I1028 11:29:34.761276  690327 config.go:182] Loaded profile config "multinode-650956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:29:34.761300  690327 status.go:174] checking status of multinode-650956 ...
	I1028 11:29:34.761728  690327 cli_runner.go:164] Run: docker container inspect multinode-650956 --format={{.State.Status}}
	I1028 11:29:34.781654  690327 status.go:371] multinode-650956 host status = "Running" (err=<nil>)
	I1028 11:29:34.781701  690327 host.go:66] Checking if "multinode-650956" exists ...
	I1028 11:29:34.782009  690327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-650956
	I1028 11:29:34.798834  690327 host.go:66] Checking if "multinode-650956" exists ...
	I1028 11:29:34.799112  690327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:29:34.799172  690327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-650956
	I1028 11:29:34.817266  690327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32905 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/multinode-650956/id_rsa Username:docker}
	I1028 11:29:34.906144  690327 ssh_runner.go:195] Run: systemctl --version
	I1028 11:29:34.910138  690327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:29:34.920570  690327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:29:34.970393  690327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-28 11:29:34.960392191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 11:29:34.971000  690327 kubeconfig.go:125] found "multinode-650956" server: "https://192.168.67.2:8443"
	I1028 11:29:34.971035  690327 api_server.go:166] Checking apiserver status ...
	I1028 11:29:34.971070  690327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:29:34.982031  690327 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	I1028 11:29:34.991248  690327 api_server.go:182] apiserver freezer: "8:freezer:/docker/5b5220e05c3a49750617df55f296c9e667ac7172d3e79c966471e55a917d06e6/crio/crio-c271353176b352d33d174446b4da2ad010233ec637ff24a202280bb7ac4e14f3"
	I1028 11:29:34.991332  690327 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5b5220e05c3a49750617df55f296c9e667ac7172d3e79c966471e55a917d06e6/crio/crio-c271353176b352d33d174446b4da2ad010233ec637ff24a202280bb7ac4e14f3/freezer.state
	I1028 11:29:34.999586  690327 api_server.go:204] freezer state: "THAWED"
	I1028 11:29:34.999625  690327 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1028 11:29:35.004666  690327 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1028 11:29:35.004697  690327 status.go:463] multinode-650956 apiserver status = Running (err=<nil>)
	I1028 11:29:35.004708  690327 status.go:176] multinode-650956 status: &{Name:multinode-650956 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:29:35.004728  690327 status.go:174] checking status of multinode-650956-m02 ...
	I1028 11:29:35.005106  690327 cli_runner.go:164] Run: docker container inspect multinode-650956-m02 --format={{.State.Status}}
	I1028 11:29:35.022057  690327 status.go:371] multinode-650956-m02 host status = "Running" (err=<nil>)
	I1028 11:29:35.022086  690327 host.go:66] Checking if "multinode-650956-m02" exists ...
	I1028 11:29:35.022417  690327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-650956-m02
	I1028 11:29:35.039207  690327 host.go:66] Checking if "multinode-650956-m02" exists ...
	I1028 11:29:35.039502  690327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:29:35.039548  690327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-650956-m02
	I1028 11:29:35.056957  690327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/19876-533928/.minikube/machines/multinode-650956-m02/id_rsa Username:docker}
	I1028 11:29:35.142094  690327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:29:35.152866  690327 status.go:176] multinode-650956-m02 status: &{Name:multinode-650956-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:29:35.152908  690327 status.go:174] checking status of multinode-650956-m03 ...
	I1028 11:29:35.153189  690327 cli_runner.go:164] Run: docker container inspect multinode-650956-m03 --format={{.State.Status}}
	I1028 11:29:35.169991  690327 status.go:371] multinode-650956-m03 host status = "Stopped" (err=<nil>)
	I1028 11:29:35.170017  690327 status.go:384] host is not running, skipping remaining checks
	I1028 11:29:35.170036  690327 status.go:176] multinode-650956-m03 status: &{Name:multinode-650956-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)

TestMultiNode/serial/StartAfterStop (9.06s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-650956 node start m03 -v=7 --alsologtostderr: (8.402775079s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.06s)
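
Single-node lifecycle, as StopNode and StartAfterStop exercise it: stopping one worker makes status exit with code 7 until the node is started again. By hand, with an illustrative profile name:

    minikube -p multi-demo node stop m03
    minikube -p multi-demo status           # exit status 7 while m03 is down
    minikube -p multi-demo node start m03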

TestMultiNode/serial/RestartKeepsNodes (79.24s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-650956
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-650956
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-650956: (24.68121468s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650956 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650956 --wait=true -v=8 --alsologtostderr: (54.444004019s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-650956
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.24s)

TestMultiNode/serial/DeleteNode (4.98s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-650956 node delete m03: (4.410971354s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.98s)

TestMultiNode/serial/StopMultiNode (23.79s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-650956 stop: (23.604023447s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650956 status: exit status 7 (95.040664ms)

-- stdout --
	multinode-650956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-650956-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr: exit status 7 (89.304183ms)

-- stdout --
	multinode-650956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-650956-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 11:31:32.201571  699586 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:31:32.201720  699586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:31:32.201734  699586 out.go:358] Setting ErrFile to fd 2...
	I1028 11:31:32.201755  699586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:31:32.201944  699586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 11:31:32.202127  699586 out.go:352] Setting JSON to false
	I1028 11:31:32.202161  699586 mustload.go:65] Loading cluster: multinode-650956
	I1028 11:31:32.202299  699586 notify.go:220] Checking for updates...
	I1028 11:31:32.202762  699586 config.go:182] Loaded profile config "multinode-650956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:31:32.202796  699586 status.go:174] checking status of multinode-650956 ...
	I1028 11:31:32.203351  699586 cli_runner.go:164] Run: docker container inspect multinode-650956 --format={{.State.Status}}
	I1028 11:31:32.221210  699586 status.go:371] multinode-650956 host status = "Stopped" (err=<nil>)
	I1028 11:31:32.221255  699586 status.go:384] host is not running, skipping remaining checks
	I1028 11:31:32.221267  699586 status.go:176] multinode-650956 status: &{Name:multinode-650956 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:31:32.221307  699586 status.go:174] checking status of multinode-650956-m02 ...
	I1028 11:31:32.221664  699586 cli_runner.go:164] Run: docker container inspect multinode-650956-m02 --format={{.State.Status}}
	I1028 11:31:32.239137  699586 status.go:371] multinode-650956-m02 host status = "Stopped" (err=<nil>)
	I1028 11:31:32.239183  699586 status.go:384] host is not running, skipping remaining checks
	I1028 11:31:32.239193  699586 status.go:176] multinode-650956-m02 status: &{Name:multinode-650956-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)

TestMultiNode/serial/RestartMultiNode (57.34s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650956 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1028 11:32:01.495301  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650956 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (56.72911989s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650956 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.34s)

TestMultiNode/serial/ValidateNameConflict (25.91s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-650956
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650956-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-650956-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.919551ms)

-- stdout --
	* [multinode-650956-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-650956-m02' is duplicated with machine name 'multinode-650956-m02' in profile 'multinode-650956'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650956-m03 --driver=docker  --container-runtime=crio
E1028 11:32:42.328915  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650956-m03 --driver=docker  --container-runtime=crio: (23.596200045s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-650956
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-650956: exit status 80 (269.168293ms)

-- stdout --
	* Adding node m03 to cluster multinode-650956 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-650956-m03 already exists in multinode-650956-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-650956-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-650956-m03: (1.917150721s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.91s)

TestPreload (106.65s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-510223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1028 11:33:24.562048  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-510223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m19.936485601s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-510223 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-510223 image pull gcr.io/k8s-minikube/busybox: (2.09183746s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-510223
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-510223: (5.724491271s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-510223 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-510223 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.329256577s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-510223 image list
helpers_test.go:175: Cleaning up "test-preload-510223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-510223
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-510223: (2.349137691s)
--- PASS: TestPreload (106.65s)
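
The preload pattern by hand, following the logged commands (profile name illustrative): start without the preloaded image tarball, pull an extra image, restart, and confirm the image survived:

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --driver=docker --container-runtime=crio
    minikube -p preload-demo image list   # busybox should still be listed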

TestScheduledStopUnix (100.21s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-498095 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-498095 --memory=2048 --driver=docker  --container-runtime=crio: (23.854000847s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-498095 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-498095 -n scheduled-stop-498095
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-498095 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1028 11:35:10.392316  541347 retry.go:31] will retry after 80.978µs: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.393497  541347 retry.go:31] will retry after 95.241µs: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.394675  541347 retry.go:31] will retry after 219.805µs: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.395826  541347 retry.go:31] will retry after 459.967µs: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.397017  541347 retry.go:31] will retry after 452.658µs: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.398171  541347 retry.go:31] will retry after 1.049639ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.399342  541347 retry.go:31] will retry after 969.481µs: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.400524  541347 retry.go:31] will retry after 2.508698ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.403785  541347 retry.go:31] will retry after 2.074009ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.406029  541347 retry.go:31] will retry after 5.101712ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.411240  541347 retry.go:31] will retry after 7.949579ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.419516  541347 retry.go:31] will retry after 7.281634ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.427842  541347 retry.go:31] will retry after 16.344927ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.445198  541347 retry.go:31] will retry after 13.632944ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
I1028 11:35:10.459493  541347 retry.go:31] will retry after 32.130742ms: open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/scheduled-stop-498095/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-498095 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-498095 -n scheduled-stop-498095
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-498095
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-498095 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-498095
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-498095: exit status 7 (73.546239ms)

-- stdout --
	scheduled-stop-498095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-498095 -n scheduled-stop-498095
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-498095 -n scheduled-stop-498095: exit status 7 (72.695189ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-498095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-498095
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-498095: (4.985079371s)
--- PASS: TestScheduledStopUnix (100.21s)

TestInsufficientStorage (12.87s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-847884 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-847884 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.484902922s)
-- stdout --
	{"specversion":"1.0","id":"5fe4abd5-fee3-4706-a9bb-0edd5ecdbefe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-847884] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8079d152-1a90-40b8-8a1c-6f7081006e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"a35a5298-1808-40a1-a894-b394ec331bf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"20405d1f-abfa-4d4a-8ce1-d3465308a759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig"}}
	{"specversion":"1.0","id":"d8af2f7e-8d59-452e-a2f4-62a8cca8d9b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube"}}
	{"specversion":"1.0","id":"1456e34e-2fd2-4133-9c72-6dc52167ef29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d97c259f-fc54-47a5-bf70-7984ec376dd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7023a3fc-6e22-44c9-9bfd-5c97659be8d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9974c7de-b9aa-43aa-8cd1-826de6ed7c9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"05faee7f-b586-46ce-9a45-1e6bf3fdd1d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2749e40-990a-43cb-bfa0-63cfc822b306","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"057f1c55-6e2f-48e0-ad07-c4a18806e1d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-847884\" primary control-plane node in \"insufficient-storage-847884\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"95fa7f6a-6a74-466b-b5a3-db0e0c934625","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1729876044-19868 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a8e4868-5bc1-45c8-a4fe-2a4579701ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1903cf99-ab29-4657-a60a-9d9bc78da61a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-847884 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-847884 --output=json --layout=cluster: exit status 7 (270.403497ms)
-- stdout --
	{"Name":"insufficient-storage-847884","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-847884","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1028 11:36:37.081928  721848 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-847884" does not appear in /home/jenkins/minikube-integration/19876-533928/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-847884 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-847884 --output=json --layout=cluster: exit status 7 (263.731786ms)
-- stdout --
	{"Name":"insufficient-storage-847884","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-847884","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1028 11:36:37.345384  721946 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-847884" does not appear in /home/jenkins/minikube-integration/19876-533928/kubeconfig
	E1028 11:36:37.356127  721946 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/insufficient-storage-847884/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-847884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-847884
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-847884: (1.852436994s)
--- PASS: TestInsufficientStorage (12.87s)
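(A side note: the --layout=cluster payloads above are regular enough to consume programmatically. A minimal sketch — struct fields taken from the JSON shown above, profile name reused for illustration; the real payload carries more fields:)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors only the top-level fields visible in the
// status output above.
type clusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
}

func main() {
	// A non-zero exit (status 7 above) still prints JSON on stdout,
	// so decode the output even when err != nil.
	out, err := exec.Command("minikube", "status", "-p", "insufficient-storage-847884",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		fmt.Println("decode:", jerr, "exec:", err)
		return
	}
	fmt.Printf("%s -> %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
}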

TestRunningBinaryUpgrade (127.6s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1084606292 start -p running-upgrade-857014 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1028 11:37:01.496383  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1084606292 start -p running-upgrade-857014 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m12.887688426s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-857014 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-857014 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.759413327s)
helpers_test.go:175: Cleaning up "running-upgrade-857014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-857014
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-857014: (2.516950449s)
--- PASS: TestRunningBinaryUpgrade (127.60s)

TestKubernetesUpgrade (362.64s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.426298259s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-533554
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-533554: (1.340631593s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-533554 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-533554 status --format={{.Host}}: exit status 7 (100.423813ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1028 11:37:42.328649  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.508454908s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-533554 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (80.521542ms)
-- stdout --
	* [kubernetes-upgrade-533554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-533554
	    minikube start -p kubernetes-upgrade-533554 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5335542 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-533554 --kubernetes-version=v1.31.2
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533554 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.695647768s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-533554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-533554
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-533554: (2.422181929s)
--- PASS: TestKubernetesUpgrade (362.64s)

TestMissingContainerUpgrade (94.67s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1387799010 start -p missing-upgrade-541786 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1387799010 start -p missing-upgrade-541786 --memory=2200 --driver=docker  --container-runtime=crio: (30.260092594s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-541786
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-541786: (13.13889408s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-541786
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-541786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-541786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.150178719s)
helpers_test.go:175: Cleaning up "missing-upgrade-541786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-541786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-541786: (4.608822747s)
--- PASS: TestMissingContainerUpgrade (94.67s)

TestStoppedBinaryUpgrade/Setup (0.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

TestStoppedBinaryUpgrade/Upgrade (109s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.301322792 start -p stopped-upgrade-736544 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.301322792 start -p stopped-upgrade-736544 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m14.067815385s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.301322792 -p stopped-upgrade-736544 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.301322792 -p stopped-upgrade-736544 stop: (4.014487308s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-736544 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-736544 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.91731428s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.00s)

TestPause/serial/Start (46.9s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-642948 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-642948 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (46.904116422s)
--- PASS: TestPause/serial/Start (46.90s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-736544
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-736544: (1.001355032s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372129 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-372129 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (113.379974ms)
-- stdout --
	* [NoKubernetes-372129] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (25.76s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372129 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372129 --driver=docker  --container-runtime=crio: (25.429814926s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-372129 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.76s)

TestNetworkPlugins/group/false (3.89s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-345979 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-345979 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (185.661719ms)
-- stdout --
	* [false-345979] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1028 11:38:50.345568  749778 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:38:50.345889  749778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:38:50.345899  749778 out.go:358] Setting ErrFile to fd 2...
	I1028 11:38:50.345906  749778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:38:50.346187  749778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-533928/.minikube/bin
	I1028 11:38:50.346966  749778 out.go:352] Setting JSON to false
	I1028 11:38:50.348531  749778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12074,"bootTime":1730103456,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:38:50.348695  749778 start.go:139] virtualization: kvm guest
	I1028 11:38:50.351314  749778 out.go:177] * [false-345979] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:38:50.352884  749778 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:38:50.352917  749778 notify.go:220] Checking for updates...
	I1028 11:38:50.357478  749778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:38:50.358948  749778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-533928/kubeconfig
	I1028 11:38:50.360416  749778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-533928/.minikube
	I1028 11:38:50.361994  749778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:38:50.363529  749778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:38:50.365630  749778 config.go:182] Loaded profile config "NoKubernetes-372129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:38:50.365838  749778 config.go:182] Loaded profile config "kubernetes-upgrade-533554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:38:50.365995  749778 config.go:182] Loaded profile config "pause-642948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:38:50.366216  749778 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:38:50.393825  749778 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 11:38:50.394029  749778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:38:50.457490  749778 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:75 SystemTime:2024-10-28 11:38:50.445269009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1028 11:38:50.457617  749778 docker.go:318] overlay module found
	I1028 11:38:50.459698  749778 out.go:177] * Using the docker driver based on user configuration
	I1028 11:38:50.461370  749778 start.go:297] selected driver: docker
	I1028 11:38:50.461390  749778 start.go:901] validating driver "docker" against <nil>
	I1028 11:38:50.461404  749778 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:38:50.463750  749778 out.go:201] 
	W1028 11:38:50.465159  749778 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1028 11:38:50.466459  749778 out.go:201] 
** /stderr **
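(The rejection above is the point of this test: the crio runtime needs a CNI, so it passes --cni=false to assert exit status 14. For comparison, a hypothetical re-run that would clear this validation — --cni=bridge is one of minikube's built-in CNI choices: out/minikube-linux-amd64 start -p false-345979 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio)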
net_test.go:88: 
----------------------- debugLogs start: false-345979 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-345979

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-345979

>>> host: /etc/nsswitch.conf:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /etc/hosts:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /etc/resolv.conf:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-345979

>>> host: crictl pods:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: crictl containers:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> k8s: describe netcat deployment:
error: context "false-345979" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-345979" does not exist

>>> k8s: netcat logs:
error: context "false-345979" does not exist

>>> k8s: describe coredns deployment:
error: context "false-345979" does not exist

>>> k8s: describe coredns pods:
error: context "false-345979" does not exist

>>> k8s: coredns logs:
error: context "false-345979" does not exist

>>> k8s: describe api server pod(s):
error: context "false-345979" does not exist

>>> k8s: api server logs:
error: context "false-345979" does not exist

>>> host: /etc/cni:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: ip a s:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: ip r s:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: iptables-save:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: iptables table nat:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> k8s: describe kube-proxy daemon set:
error: context "false-345979" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-345979" does not exist

>>> k8s: kube-proxy logs:
error: context "false-345979" does not exist

>>> host: kubelet daemon status:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: kubelet daemon config:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> k8s: kubelet logs:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:37:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-533554
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-642948
contexts:
- context:
    cluster: kubernetes-upgrade-533554
    user: kubernetes-upgrade-533554
  name: kubernetes-upgrade-533554
- context:
    cluster: pause-642948
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-642948
  name: pause-642948
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-533554
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/kubernetes-upgrade-533554/client.crt
    client-key: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/kubernetes-upgrade-533554/client.key
- name: pause-642948
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/pause-642948/client.crt
    client-key: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/pause-642948/client.key
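(A side note on consuming this merged kubeconfig programmatically — a minimal sketch, assuming k8s.io/client-go is on the module path; the file path is the KUBECONFIG shown earlier in this run:)

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same merged kubeconfig dumped above and walk its clusters.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19876-533928/kubeconfig")
	if err != nil {
		panic(err)
	}
	for name, c := range cfg.Clusters {
		fmt.Printf("cluster %s -> %s\n", name, c.Server)
	}
	// current-context is "" above, which is why every kubectl probe against
	// the never-created false-345979 context fails with "context was not found".
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
}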
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-345979

>>> host: docker daemon status:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: docker daemon config:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /etc/docker/daemon.json:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: docker system info:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: cri-docker daemon status:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: cri-docker daemon config:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: cri-dockerd version:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: containerd daemon status:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: containerd daemon config:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /etc/containerd/config.toml:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: containerd config dump:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: crio daemon status:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: crio daemon config:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: /etc/crio:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

>>> host: crio config:
* Profile "false-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345979"

----------------------- debugLogs end: false-345979 [took: 3.531377593s] --------------------------------
helpers_test.go:175: Cleaning up "false-345979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-345979
--- PASS: TestNetworkPlugins/group/false (3.89s)

TestPause/serial/SecondStartNoReconfiguration (28.98s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-642948 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-642948 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.961315006s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.98s)

TestNoKubernetes/serial/StartWithStopK8s (8.49s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372129 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372129 --no-kubernetes --driver=docker  --container-runtime=crio: (6.146837941s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-372129 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-372129 status -o json: exit status 2 (323.368357ms)
-- stdout --
	{"Name":"NoKubernetes-372129","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-372129
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-372129: (2.024336765s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.49s)

TestNoKubernetes/serial/Start (7.84s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372129 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372129 --no-kubernetes --driver=docker  --container-runtime=crio: (7.835446225s)
--- PASS: TestNoKubernetes/serial/Start (7.84s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-372129 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-372129 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.037555ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (4.55s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.615268222s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.55s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-372129
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-372129: (1.221989756s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (8.76s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372129 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372129 --driver=docker  --container-runtime=crio: (8.755653412s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.76s)

TestPause/serial/Pause (0.86s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-642948 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.32s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-642948 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-642948 --output=json --layout=cluster: exit status 2 (319.644807ms)

-- stdout --
	{"Name":"pause-642948","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-642948","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.7s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-642948 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.82s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-642948 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (3.44s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-642948 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-642948 --alsologtostderr -v=5: (3.437101921s)
--- PASS: TestPause/serial/DeletePaused (3.44s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-372129 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-372129 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.937651ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestPause/serial/VerifyDeletedResources (2.46s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.381603042s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-642948
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-642948: exit status 1 (25.229776ms)

-- stdout --
	[]

-- /stdout --
** stderr **
	Error response from daemon: get pause-642948: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.46s)

TestStartStop/group/old-k8s-version/serial/FirstStart (133.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-747812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-747812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m13.133019512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.13s)

TestStartStop/group/no-preload/serial/FirstStart (54.83s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-393935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 11:42:01.494741  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-393935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (54.831559334s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-747812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [477fe729-1a71-4432-8899-aebad942d565] Pending
helpers_test.go:344: "busybox" [477fe729-1a71-4432-8899-aebad942d565] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [477fe729-1a71-4432-8899-aebad942d565] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003466668s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-747812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

TestStartStop/group/no-preload/serial/DeployApp (10.26s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-393935 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4938b954-9f07-4d56-8670-55e424248c0e] Pending
helpers_test.go:344: "busybox" [4938b954-9f07-4d56-8670-55e424248c0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4938b954-9f07-4d56-8670-55e424248c0e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003645483s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-393935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-747812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-747812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (11.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-747812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-747812 --alsologtostderr -v=3: (11.856064011s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.86s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-393935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-393935 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/no-preload/serial/Stop (14.43s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-393935 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-393935 --alsologtostderr -v=3: (14.427306829s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-747812 -n old-k8s-version-747812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-747812 -n old-k8s-version-747812: exit status 7 (69.907401ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-747812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (149.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-747812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-747812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m29.016399789s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-747812 -n old-k8s-version-747812
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (149.34s)

TestStartStop/group/embed-certs/serial/FirstStart (73.77s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-490109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 11:42:42.328727  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-490109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (1m13.772450698s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.77s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-393935 -n no-preload-393935
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-393935 -n no-preload-393935: exit status 7 (98.626752ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-393935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (263.21s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-393935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-393935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.89490871s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-393935 -n no-preload-393935
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.21s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-659666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-659666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (42.378681952s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.38s)

TestStartStop/group/embed-certs/serial/DeployApp (9.27s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-490109 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [21240770-16e7-43d3-a124-8b47d9dcfc41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [21240770-16e7-43d3-a124-8b47d9dcfc41] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005053629s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-490109 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-659666 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54b9a13e-814a-4a81-a224-c09595a692fe] Pending
helpers_test.go:344: "busybox" [54b9a13e-814a-4a81-a224-c09595a692fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [54b9a13e-814a-4a81-a224-c09595a692fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00460262s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-659666 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-490109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-490109 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/embed-certs/serial/Stop (11.95s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-490109 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-490109 --alsologtostderr -v=3: (11.948810395s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-659666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-659666 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.83s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-659666 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-659666 --alsologtostderr -v=3: (11.830036314s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.83s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-490109 -n embed-certs-490109
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-490109 -n embed-certs-490109: exit status 7 (89.566731ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-490109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (262.79s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-490109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-490109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.452243922s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-490109 -n embed-certs-490109
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.79s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666: exit status 7 (76.79688ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-659666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-659666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-659666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.885321674s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zfm7n" [f8367ae0-5a60-42fd-87cb-636f786db12b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00375946s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zfm7n" [f8367ae0-5a60-42fd-87cb-636f786db12b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004224616s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-747812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-747812 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-747812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-747812 -n old-k8s-version-747812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-747812 -n old-k8s-version-747812: exit status 2 (295.892564ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-747812 -n old-k8s-version-747812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-747812 -n old-k8s-version-747812: exit status 2 (296.886072ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-747812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-747812 -n old-k8s-version-747812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-747812 -n old-k8s-version-747812
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.63s)

TestStartStop/group/newest-cni/serial/FirstStart (26.45s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-353052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 11:45:45.395590  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-353052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (26.449492322s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.45s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-353052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-353052 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-353052 --alsologtostderr -v=3: (1.203735236s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-353052 -n newest-cni-353052
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-353052 -n newest-cni-353052: exit status 7 (75.343683ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-353052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (12.95s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-353052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-353052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (12.594928174s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-353052 -n newest-cni-353052
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.95s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-353052 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.78s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-353052 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-353052 -n newest-cni-353052
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-353052 -n newest-cni-353052: exit status 2 (298.523683ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-353052 -n newest-cni-353052
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-353052 -n newest-cni-353052: exit status 2 (295.32022ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-353052 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-353052 -n newest-cni-353052
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-353052 -n newest-cni-353052
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.78s)

TestNetworkPlugins/group/auto/Start (42.27s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.26910797s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.27s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-345979 "pgrep -a kubelet"
I1028 11:46:51.483949  541347 config.go:182] Loaded profile config "auto-345979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-345979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-67lzb" [9aab3ae4-be97-4395-9b65-1de303fd9055] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-67lzb" [9aab3ae4-be97-4395-9b65-1de303fd9055] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003936724s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-345979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-llgfg" [b97b4ca8-24a0-4de5-909b-4d6500545242] Running
E1028 11:47:13.779297  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:13.785650  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:13.797104  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:13.818560  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:13.860491  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:13.942002  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:14.103850  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:14.425633  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:15.067374  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004334089s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-llgfg" [b97b4ca8-24a0-4de5-909b-4d6500545242] Running
E1028 11:47:16.349542  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004687479s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-393935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/Start (75.25s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1028 11:47:18.911071  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m15.249481131s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.25s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-393935 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.09s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-393935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-393935 -n no-preload-393935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-393935 -n no-preload-393935: exit status 2 (318.288238ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-393935 -n no-preload-393935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-393935 -n no-preload-393935: exit status 2 (298.521544ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-393935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-393935 -n no-preload-393935
E1028 11:47:24.032908  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-393935 -n no-preload-393935
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestNetworkPlugins/group/calico/Start (51.03s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1028 11:47:34.274305  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:42.328475  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/addons-673472/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:47:54.756169  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.026437582s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.03s)
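
Each network-plugin Start test is a plain minikube start with a CNI selected; --cni accepts a built-in name (calico, flannel, bridge, kindnet) or a path to a CNI manifest, as the custom-flannel run below shows. The invocation used here, trimmed for readability:

    out/minikube-linux-amd64 start -p calico-345979 \
      --memory=3072 --wait=true --wait-timeout=15m \
      --cni=calico --driver=docker --container-runtime=crio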

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rnj86" [219feed6-82f9-4321-b209-40e517ed68ed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004572329s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
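
ControllerPod simply waits for the CNI's own DaemonSet pod to report Ready. Roughly the same check by hand (the test itself polls via client-go), with the label and namespace taken from the log above:

    kubectl --context calico-345979 -n kube-system wait \
      --for=condition=ready pod -l k8s-app=calico-node --timeout=10m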

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-345979 "pgrep -a kubelet"
I1028 11:48:24.990725  541347 config.go:182] Loaded profile config "calico-345979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
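
KubeletFlags inspects the kubelet command line inside the node by grepping the process list over SSH; the same probe by hand prints the kubelet PID and its full argument list:

    out/minikube-linux-amd64 ssh -p calico-345979 "pgrep -a kubelet"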

TestNetworkPlugins/group/calico/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-345979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nxbpw" [6bfd309d-ab9d-4f7b-9dbf-6c8d2addfbce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nxbpw" [6bfd309d-ab9d-4f7b-9dbf-6c8d2addfbce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004195283s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.19s)
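
NetCatPod deploys the shared netcat/dnsutils test Deployment that the later DNS, Localhost, and HairPin subtests exec into. A hand-rolled version of the deploy-and-wait step (the manifest path is the repo's testdata file used above):

    kubectl --context calico-345979 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-345979 wait --for=condition=ready pod -l app=netcat --timeout=15m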

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hrsq7" [072b0e71-b581-4319-9031-64a0865f9d7f] Running
E1028 11:48:35.717663  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003898996s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-345979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)
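
The three connectivity subtests above all exec into the netcat pod: DNS resolves the in-cluster kubernetes.default name, Localhost checks that the pod can reach its own loopback, and HairPin checks that the pod can reach itself back through its own Service name (hairpin traffic). The probes, as run by the test:

    kubectl --context calico-345979 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context calico-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context calico-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"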

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-345979 "pgrep -a kubelet"
I1028 11:48:39.747744  541347 config.go:182] Loaded profile config "kindnet-345979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-345979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fvp64" [1873fbeb-d3fb-4591-af9c-97fb1134a3b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fvp64" [1873fbeb-d3fb-4591-af9c-97fb1134a3b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004781996s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dvrnz" [083e9bd8-088e-4fdf-be31-a9c24d3c9dfe] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004154503s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k4spv" [e8aa4fc1-07d7-4fc6-94c0-c6c978f1f6fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003908557s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dvrnz" [083e9bd8-088e-4fdf-be31-a9c24d3c9dfe] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004485804s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-490109 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
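
UserAppExistsAfterStop and AddonExistsAfterStop together confirm that the dashboard addon survived the stop/start cycle: the first waits for the dashboard pod, the second additionally describes the metrics-scraper Deployment. Checked by hand:

    kubectl --context embed-certs-490109 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-490109 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper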

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k4spv" [e8aa4fc1-07d7-4fc6-94c0-c6c978f1f6fc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004311697s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-659666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-345979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-490109 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-490109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-490109 -n embed-certs-490109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-490109 -n embed-certs-490109: exit status 2 (316.330891ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-490109 -n embed-certs-490109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-490109 -n embed-certs-490109: exit status 2 (339.484384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-490109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-490109 -n embed-certs-490109
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-490109 -n embed-certs-490109
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-659666 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-659666 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666: exit status 2 (416.860107ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666: exit status 2 (345.654068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-659666 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-659666 -n default-k8s-diff-port-659666
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.29s)

TestNetworkPlugins/group/custom-flannel/Start (53.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.232574597s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.23s)

TestNetworkPlugins/group/enable-default-cni/Start (75.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.446346639s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.45s)

TestNetworkPlugins/group/flannel/Start (53.52s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.521967619s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.52s)

TestNetworkPlugins/group/bridge/Start (66.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-345979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m6.666577142s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.67s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-345979 "pgrep -a kubelet"
I1028 11:49:49.476870  541347 config.go:182] Loaded profile config "custom-flannel-345979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-345979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q4wjz" [ffe8c57c-2ebd-400b-a988-a626577d0014] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q4wjz" [ffe8c57c-2ebd-400b-a988-a626577d0014] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004476872s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jq2qp" [45eaae83-513b-4a59-bce2-fc1708b857d6] Running
E1028 11:49:57.639484  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/old-k8s-version-747812/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006601266s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-345979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-345979 "pgrep -a kubelet"
I1028 11:50:02.121902  541347 config.go:182] Loaded profile config "flannel-345979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-345979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qxc6d" [28377a41-65e4-4762-9a00-cc47e4db66c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 11:50:04.564014  541347 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/functional-607680/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-qxc6d" [28377a41-65e4-4762-9a00-cc47e4db66c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003155161s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-345979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-345979 "pgrep -a kubelet"
I1028 11:50:14.090019  541347 config.go:182] Loaded profile config "enable-default-cni-345979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-345979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7bnlx" [1141bc40-4d46-459b-bef9-ccf83084abd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7bnlx" [1141bc40-4d46-459b-bef9-ccf83084abd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003615565s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-345979 "pgrep -a kubelet"
I1028 11:50:19.212692  541347 config.go:182] Loaded profile config "bridge-345979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-345979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mcbbf" [d0171820-691c-4642-8136-347fc5743da3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mcbbf" [d0171820-691c-4642-8136-347fc5743da3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00402518s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-345979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-345979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-345979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (26/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-673472 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-792404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-792404
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.47s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-345979 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-345979

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-345979

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: /etc/hosts:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: /etc/resolv.conf:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-345979

>>> host: crictl pods:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: crictl containers:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> k8s: describe netcat deployment:
error: context "kubenet-345979" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-345979" does not exist

>>> k8s: netcat logs:
error: context "kubenet-345979" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-345979" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-345979" does not exist

>>> k8s: coredns logs:
error: context "kubenet-345979" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-345979" does not exist

>>> k8s: api server logs:
error: context "kubenet-345979" does not exist

>>> host: /etc/cni:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: ip a s:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: ip r s:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: iptables-save:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: iptables table nat:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-345979" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-345979" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-345979" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: kubelet daemon config:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> k8s: kubelet logs:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:37:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-533554
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-642948
contexts:
- context:
    cluster: kubernetes-upgrade-533554
    user: kubernetes-upgrade-533554
  name: kubernetes-upgrade-533554
- context:
    cluster: pause-642948
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-642948
  name: pause-642948
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-533554
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/kubernetes-upgrade-533554/client.crt
    client-key: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/kubernetes-upgrade-533554/client.key
- name: pause-642948
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/pause-642948/client.crt
    client-key: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/pause-642948/client.key
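
The dump above also explains the errors: this kubeconfig knows only kubernetes-upgrade-533554 and pause-642948, and current-context is empty, so any reference to kubenet-345979 must fail. A minimal check against the same kubeconfig (context names taken from the dump; nothing here is specific to this harness):

# List the contexts the kubeconfig actually contains.
kubectl config get-contexts
# Asking for the missing context reproduces the error seen throughout this section.
kubectl --context kubenet-345979 get pods
# With current-context set to "", even a bare kubectl call needs a context
# selected first, e.g.:
kubectl config use-context pause-642948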

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-345979

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345979"

                                                
                                                
----------------------- debugLogs end: kubenet-345979 [took: 3.259524495s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-345979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-345979
--- SKIP: TestNetworkPlugins/group/kubenet (3.47s)
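
If these logs were ever needed from a real run, the hint printed throughout the section applies as-is; a sketch of the round trip, using the binary and profile names from this report (the --network-plugin flag is an assumption about how a kubenet cluster would be started, and it is deprecated on newer minikube releases):

# Start the profile the debug collector was looking for, confirm it exists,
# then tear it down again, mirroring the cleanup step above.
out/minikube-linux-amd64 start -p kubenet-345979 --network-plugin=kubenet
out/minikube-linux-amd64 profile list
out/minikube-linux-amd64 delete -p kubenet-345979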

                                                
                                    
TestNetworkPlugins/group/cilium (4.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-345979 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-345979
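
For reference, on a cluster that exists these netcat probes run their DNS checks from inside a pod; a hand-run equivalent looks like this (the pod names and images below are illustrative assumptions, not part of this report; 10.96.0.10 is the cluster DNS ClusterIP named in the probes):

# Resolve the API service through the cluster DNS, as the nslookup probe does.
kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default
# Query CoreDNS directly, as the dig tcp/53 probe does.
kubectl run dig-probe --rm -it --restart=Never --image=nicolaka/netshoot -- \
  dig @10.96.0.10 kubernetes.default.svc.cluster.local +tcp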

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-345979" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-372129
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:37:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-533554
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-533928/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-642948
contexts:
- context:
    cluster: NoKubernetes-372129
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-372129
  name: NoKubernetes-372129
- context:
    cluster: kubernetes-upgrade-533554
    user: kubernetes-upgrade-533554
  name: kubernetes-upgrade-533554
- context:
    cluster: pause-642948
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:38:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-642948
  name: pause-642948
current-context: NoKubernetes-372129
kind: Config
preferences: {}
users:
- name: NoKubernetes-372129
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/NoKubernetes-372129/client.crt
    client-key: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/NoKubernetes-372129/client.key
- name: kubernetes-upgrade-533554
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/kubernetes-upgrade-533554/client.crt
    client-key: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/kubernetes-upgrade-533554/client.key
- name: pause-642948
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/pause-642948/client.crt
    client-key: /home/jenkins/minikube-integration/19876-533928/.minikube/profiles/pause-642948/client.key
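
Unlike the kubenet dump, this kubeconfig does have a current-context (NoKubernetes-372129), yet the probes still fail: the collector always passes --context cilium-345979, and an explicit --context overrides current-context. Both behaviours are easy to see against the same kubeconfig (names taken from the dump above):

# Uses current-context (NoKubernetes-372129) and talks to that cluster.
kubectl get nodes
# An explicit --context wins over current-context and fails with the
# context-not-found error seen throughout this section.
kubectl --context cilium-345979 get nodes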

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-345979

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-345979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345979"

                                                
                                                
----------------------- debugLogs end: cilium-345979 [took: 3.905421178s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-345979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-345979
--- SKIP: TestNetworkPlugins/group/cilium (4.10s)
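
Both skipped groups finish by deleting the never-started profile; verifying that nothing was left behind is a one-liner with the binary from this report:

# Neither kubenet-345979 nor cilium-345979 should appear after cleanup.
out/minikube-linux-amd64 profile list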

                                                
                                    