Test Report: Docker_Linux_crio 20090

20ecd3658b86897ae797acf630cebadf77816c63:2024-12-13:37470

Test failures (2/330)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress       | 156.7        |
| 38    | TestAddons/parallel/MetricsServer | 358.87       |
TestAddons/parallel/Ingress (156.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-237678 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-237678 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-237678 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [14507dff-e4be-455e-948f-6e6e30d86663] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [14507dff-e4be-455e-948f-6e6e30d86663] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.002885592s
I1213 19:05:49.787630   22695 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-237678 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.849317713s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
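Note: "Process exited with status 28" is curl's exit code for a timed-out transfer, surfaced through "minikube ssh"; the request to the ingress controller never completed within the 2m10s retry window. As a minimal diagnostic sketch (the --max-time flag and the pods/svc listing are additions for hand debugging, not part of the test), the same probe can be re-run against this profile:

	# Re-run the ingress probe with verbose output and a short explicit timeout
	out/minikube-linux-amd64 -p addons-237678 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Confirm the ingress-nginx controller pod and service are up and addressable
	kubectl --context addons-237678 -n ingress-nginx get pods,svc -o wide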
addons_test.go:286: (dbg) Run:  kubectl --context addons-237678 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
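The nslookup above exercises the ingress-dns addon: 192.168.49.2 is the node IP returned by the preceding "minikube ip" step, and the addon answers DNS queries for ingress hostnames directly from that address. A hand-run equivalent (the IP shell variable is illustrative, not from the test):

	# Query the ingress-dns resolver running on the minikube node
	IP=$(out/minikube-linux-amd64 -p addons-237678 ip)
	nslookup hello-john.test "$IP"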
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-237678
helpers_test.go:235: (dbg) docker inspect addons-237678:

-- stdout --
	[
	    {
	        "Id": "aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03",
	        "Created": "2024-12-13T19:02:37.256027583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 24880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-13T19:02:37.391578376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d489d36b1c808fdb46955d21247b1ea12cf0c774bbaa5d6d4f9ce6979fd65009",
	        "ResolvConfPath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/hostname",
	        "HostsPath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/hosts",
	        "LogPath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03-json.log",
	        "Name": "/addons-237678",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-237678:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-237678",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b-init/diff:/var/lib/docker/overlay2/f762192c552406e923de3fcb2db2756770325685c188638c13eb19bc257f7ea1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-237678",
	                "Source": "/var/lib/docker/volumes/addons-237678/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-237678",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-237678",
	                "name.minikube.sigs.k8s.io": "addons-237678",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9787fc3be071ca8f943d62019dfabb149f8a0d20a3c8529f454e950668f8d26c",
	            "SandboxKey": "/var/run/docker/netns/9787fc3be071",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-237678": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5eb4d2d8f8dd4d490eb0db6ef731064c7679e08089bdcf32fc89ea4ea2086677",
	                    "EndpointID": "0d2f33bbcdfe3f215b860828d7058288c48a373d7c50cff3e3c5c7c4a8e5ba90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-237678",
	                        "aecc65461015"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
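In the HostConfig above, each PortBindings entry requests a dynamically assigned host port (empty HostPort) bound to 127.0.0.1; the actual assignments appear under NetworkSettings.Ports (22/tcp -> 32768, 8443/tcp -> 32771, and so on). The mapped port for a given guest port can be recovered with the same Go template minikube itself runs later in this log; a sketch, assuming the container name shown above:

	# Recover the host port that Docker mapped to the guest's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-237678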
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-237678 -n addons-237678
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 logs -n 25: (1.123627211s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-333411                                                                     | download-only-333411   | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| start   | --download-only -p                                                                          | download-docker-509470 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | download-docker-509470                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-509470                                                                   | download-docker-509470 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-428326   | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | binary-mirror-428326                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41935                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-428326                                                                     | binary-mirror-428326   | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-237678                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-237678                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-237678 --wait=true                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | -p addons-237678                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-237678 ip                                                                            | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-237678 ssh curl -s                                                                   | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-237678 ssh cat                                                                       | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | /opt/local-path-provisioner/pvc-44be87ee-926f-4202-9a14-cc59be04dc06_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-237678 ip                                                                            | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:02:14
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:02:14.869381   24117 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:02:14.869508   24117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:14.869518   24117 out.go:358] Setting ErrFile to fd 2...
	I1213 19:02:14.869524   24117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:14.869704   24117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:02:14.870288   24117 out.go:352] Setting JSON to false
	I1213 19:02:14.871089   24117 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2679,"bootTime":1734113856,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:02:14.871187   24117 start.go:139] virtualization: kvm guest
	I1213 19:02:14.873343   24117 out.go:177] * [addons-237678] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:02:14.874776   24117 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:02:14.874771   24117 notify.go:220] Checking for updates...
	I1213 19:02:14.876218   24117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:02:14.877612   24117 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:02:14.878886   24117 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	I1213 19:02:14.880245   24117 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:02:14.881525   24117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:02:14.882948   24117 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:02:14.903737   24117 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:02:14.903863   24117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:02:14.949872   24117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-13 19:02:14.941037928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:02:14.949972   24117 docker.go:318] overlay module found
	I1213 19:02:14.952952   24117 out.go:177] * Using the docker driver based on user configuration
	I1213 19:02:14.954455   24117 start.go:297] selected driver: docker
	I1213 19:02:14.954468   24117 start.go:901] validating driver "docker" against <nil>
	I1213 19:02:14.954479   24117 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:02:14.955217   24117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:02:15.001239   24117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-13 19:02:14.991614924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:02:15.001475   24117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:02:15.001716   24117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:02:15.003482   24117 out.go:177] * Using Docker driver with root privileges
	I1213 19:02:15.004742   24117 cni.go:84] Creating CNI manager for ""
	I1213 19:02:15.004811   24117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:02:15.004826   24117 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:02:15.004896   24117 start.go:340] cluster config:
	{Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:02:15.006206   24117 out.go:177] * Starting "addons-237678" primary control-plane node in "addons-237678" cluster
	I1213 19:02:15.007232   24117 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:02:15.008465   24117 out.go:177] * Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:02:15.009629   24117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:15.009655   24117 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:02:15.009674   24117 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:15.009686   24117 cache.go:56] Caching tarball of preloaded images
	I1213 19:02:15.009776   24117 preload.go:172] Found /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 19:02:15.009791   24117 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 19:02:15.010129   24117 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/config.json ...
	I1213 19:02:15.010155   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/config.json: {Name:mk08cc8c3b1749a2d5b51432634b107fe06d2d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:15.025309   24117 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:02:15.025419   24117 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:02:15.025434   24117 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory, skipping pull
	I1213 19:02:15.025438   24117 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in cache, skipping pull
	I1213 19:02:15.025447   24117 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	I1213 19:02:15.025455   24117 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from local cache
	I1213 19:02:27.446756   24117 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from cached tarball
	I1213 19:02:27.446791   24117 cache.go:194] Successfully downloaded all kic artifacts
	I1213 19:02:27.446820   24117 start.go:360] acquireMachinesLock for addons-237678: {Name:mk9d17c191be779336b39fc07058cf7c6bc54007 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:27.446913   24117 start.go:364] duration metric: took 75.192µs to acquireMachinesLock for "addons-237678"
	I1213 19:02:27.446954   24117 start.go:93] Provisioning new machine with config: &{Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:02:27.447043   24117 start.go:125] createHost starting for "" (driver="docker")
	I1213 19:02:27.448954   24117 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1213 19:02:27.449165   24117 start.go:159] libmachine.API.Create for "addons-237678" (driver="docker")
	I1213 19:02:27.449192   24117 client.go:168] LocalClient.Create starting
	I1213 19:02:27.449279   24117 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem
	I1213 19:02:27.608485   24117 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem
	I1213 19:02:27.762981   24117 cli_runner.go:164] Run: docker network inspect addons-237678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 19:02:27.779101   24117 cli_runner.go:211] docker network inspect addons-237678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 19:02:27.779175   24117 network_create.go:284] running [docker network inspect addons-237678] to gather additional debugging logs...
	I1213 19:02:27.779204   24117 cli_runner.go:164] Run: docker network inspect addons-237678
	W1213 19:02:27.794856   24117 cli_runner.go:211] docker network inspect addons-237678 returned with exit code 1
	I1213 19:02:27.794895   24117 network_create.go:287] error running [docker network inspect addons-237678]: docker network inspect addons-237678: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-237678 not found
	I1213 19:02:27.794912   24117 network_create.go:289] output of [docker network inspect addons-237678]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-237678 not found
	
	** /stderr **
	I1213 19:02:27.795000   24117 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:02:27.810774   24117 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a8e350}
	I1213 19:02:27.810816   24117 network_create.go:124] attempt to create docker network addons-237678 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 19:02:27.810853   24117 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-237678 addons-237678
	I1213 19:02:28.157571   24117 network_create.go:108] docker network addons-237678 192.168.49.0/24 created
	I1213 19:02:28.157596   24117 kic.go:121] calculated static IP "192.168.49.2" for the "addons-237678" container
	I1213 19:02:28.157661   24117 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 19:02:28.172992   24117 cli_runner.go:164] Run: docker volume create addons-237678 --label name.minikube.sigs.k8s.io=addons-237678 --label created_by.minikube.sigs.k8s.io=true
	I1213 19:02:28.211186   24117 oci.go:103] Successfully created a docker volume addons-237678
	I1213 19:02:28.211293   24117 cli_runner.go:164] Run: docker run --rm --name addons-237678-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-237678 --entrypoint /usr/bin/test -v addons-237678:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib
	I1213 19:02:32.655361   24117 cli_runner.go:217] Completed: docker run --rm --name addons-237678-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-237678 --entrypoint /usr/bin/test -v addons-237678:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib: (4.444030941s)
	I1213 19:02:32.655389   24117 oci.go:107] Successfully prepared a docker volume addons-237678
	I1213 19:02:32.655405   24117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:32.655422   24117 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 19:02:32.655467   24117 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-237678:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 19:02:37.199678   24117 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-237678:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.544174785s)
	I1213 19:02:37.199708   24117 kic.go:203] duration metric: took 4.544281704s to extract preloaded images to volume ...
	W1213 19:02:37.199838   24117 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 19:02:37.199960   24117 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 19:02:37.241444   24117 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-237678 --name addons-237678 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-237678 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-237678 --network addons-237678 --ip 192.168.49.2 --volume addons-237678:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9
	I1213 19:02:37.571702   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Running}}
	I1213 19:02:37.589340   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:37.608011   24117 cli_runner.go:164] Run: docker exec addons-237678 stat /var/lib/dpkg/alternatives/iptables
	I1213 19:02:37.647850   24117 oci.go:144] the created container "addons-237678" has a running status.
	I1213 19:02:37.647886   24117 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa...
	I1213 19:02:37.875540   24117 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 19:02:37.897987   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:37.919387   24117 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 19:02:37.919413   24117 kic_runner.go:114] Args: [docker exec --privileged addons-237678 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 19:02:38.017417   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:38.038563   24117 machine.go:93] provisionDockerMachine start ...
	I1213 19:02:38.038656   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.057310   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.057504   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.057517   24117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 19:02:38.198760   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-237678
	
	I1213 19:02:38.198800   24117 ubuntu.go:169] provisioning hostname "addons-237678"
	I1213 19:02:38.198866   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.217589   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.217770   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.217784   24117 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-237678 && echo "addons-237678" | sudo tee /etc/hostname
	I1213 19:02:38.369038   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-237678
	
	I1213 19:02:38.369100   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.386556   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.386757   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.386781   24117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-237678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-237678/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-237678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:02:38.519356   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
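
The SSH script above is an idempotent /etc/hosts update: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 entry if one exists, otherwise append one. Below is a minimal standalone Go sketch of the same logic; it is illustrative only, not minikube's code, and writing /etc/hosts requires root.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Idempotent /etc/hosts update mirroring the SSH script above: do nothing if
// the hostname is present, rewrite an existing 127.0.1.1 entry, else append.
func main() {
	const host = "addons-237678" // hostname from this log
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	text := string(data)
	if strings.Contains(text, host) {
		return // already mapped
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(text) {
		text = re.ReplaceAllString(text, "127.0.1.1 "+host)
	} else {
		text += fmt.Sprintf("127.0.1.1 %s\n", host)
	}
	if err := os.WriteFile("/etc/hosts", []byte(text), 0644); err != nil {
		panic(err)
	}
}
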
	I1213 19:02:38.519382   24117 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20090-15903/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-15903/.minikube}
	I1213 19:02:38.519421   24117 ubuntu.go:177] setting up certificates
	I1213 19:02:38.519433   24117 provision.go:84] configureAuth start
	I1213 19:02:38.519483   24117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-237678
	I1213 19:02:38.536086   24117 provision.go:143] copyHostCerts
	I1213 19:02:38.536151   24117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-15903/.minikube/ca.pem (1078 bytes)
	I1213 19:02:38.536252   24117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-15903/.minikube/cert.pem (1123 bytes)
	I1213 19:02:38.536317   24117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-15903/.minikube/key.pem (1675 bytes)
	I1213 19:02:38.536371   24117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-15903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca-key.pem org=jenkins.addons-237678 san=[127.0.0.1 192.168.49.2 addons-237678 localhost minikube]
	I1213 19:02:38.629249   24117 provision.go:177] copyRemoteCerts
	I1213 19:02:38.629309   24117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:02:38.629342   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.646327   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
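
The inspect template in the cli_runner lines above is how the host port for the container's forwarded 22/tcp endpoint (32768 here) is recovered before each SSH session. A minimal Go sketch of the same lookup, assuming a local Docker daemon and the container name from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Recover the host port Docker mapped to the container's 22/tcp endpoint,
// using the same inspect template that appears throughout this log.
func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-237678").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 32768
}
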
	I1213 19:02:38.743189   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 19:02:38.764090   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:02:38.784954   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:02:38.806073   24117 provision.go:87] duration metric: took 286.618153ms to configureAuth
	I1213 19:02:38.806103   24117 ubuntu.go:193] setting minikube options for container-runtime
	I1213 19:02:38.806267   24117 config.go:182] Loaded profile config "addons-237678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:38.806357   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.822926   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.823106   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.823125   24117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:02:39.041385   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:02:39.041405   24117 machine.go:96] duration metric: took 1.002820912s to provisionDockerMachine
	I1213 19:02:39.041415   24117 client.go:171] duration metric: took 11.592217765s to LocalClient.Create
	I1213 19:02:39.041426   24117 start.go:167] duration metric: took 11.592262718s to libmachine.API.Create "addons-237678"
	I1213 19:02:39.041432   24117 start.go:293] postStartSetup for "addons-237678" (driver="docker")
	I1213 19:02:39.041441   24117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:02:39.041484   24117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:02:39.041518   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.058582   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.155480   24117 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:02:39.158701   24117 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:02:39.158732   24117 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1213 19:02:39.158740   24117 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1213 19:02:39.158749   24117 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1213 19:02:39.158764   24117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-15903/.minikube/addons for local assets ...
	I1213 19:02:39.158819   24117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-15903/.minikube/files for local assets ...
	I1213 19:02:39.158846   24117 start.go:296] duration metric: took 117.408146ms for postStartSetup
	I1213 19:02:39.159139   24117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-237678
	I1213 19:02:39.176017   24117 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/config.json ...
	I1213 19:02:39.176258   24117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:02:39.176302   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.193919   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.287765   24117 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:02:39.291735   24117 start.go:128] duration metric: took 11.844676452s to createHost
	I1213 19:02:39.291762   24117 start.go:83] releasing machines lock for "addons-237678", held for 11.844837676s
	I1213 19:02:39.291828   24117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-237678
	I1213 19:02:39.308742   24117 ssh_runner.go:195] Run: cat /version.json
	I1213 19:02:39.308794   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.308823   24117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:02:39.308892   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.327200   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.327775   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.418859   24117 ssh_runner.go:195] Run: systemctl --version
	I1213 19:02:39.422729   24117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:02:39.559873   24117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 19:02:39.563931   24117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:02:39.580847   24117 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1213 19:02:39.580935   24117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:02:39.606376   24117 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1213 19:02:39.606398   24117 start.go:495] detecting cgroup driver to use...
	I1213 19:02:39.606425   24117 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:02:39.606461   24117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:02:39.619458   24117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:02:39.629006   24117 docker.go:217] disabling cri-docker service (if available) ...
	I1213 19:02:39.629051   24117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:02:39.640415   24117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:02:39.652480   24117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:02:39.724111   24117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:02:39.800484   24117 docker.go:233] disabling docker service ...
	I1213 19:02:39.800548   24117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:02:39.817683   24117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:02:39.827869   24117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:02:39.900341   24117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:02:39.980660   24117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:02:39.990327   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:02:40.005441   24117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 19:02:40.005503   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.013979   24117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:02:40.014039   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.022604   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.031296   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.039734   24117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:02:40.047895   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.056094   24117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.069660   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
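
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A rough Go equivalent of the first two rewrites follows; this is a sketch only (minikube shells out to sed on the node rather than editing the file in Go, and the config excerpt is invented for illustration):

package main

import (
	"fmt"
	"regexp"
)

// Apply the same line rewrites as the first two sed commands above.
func main() {
	conf := `# excerpt of /etc/crio/crio.conf.d/02-crio.conf (illustrative)
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
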
	I1213 19:02:40.077947   24117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:02:40.085058   24117 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 19:02:40.085115   24117 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 19:02:40.097525   24117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
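
The status-255 sysctl failure above is expected on hosts where the br_netfilter module is not loaded, so the flow falls back to modprobe and then enables IPv4 forwarding. A compact Go sketch of that check-then-fallback sequence (assumes root; the paths and module name are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// If the bridge-netfilter sysctl is absent, load br_netfilter,
// then enable IPv4 forwarding, as the log does above.
func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err, string(out))
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}
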
	I1213 19:02:40.105057   24117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:40.174877   24117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:02:40.275535   24117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:02:40.275605   24117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:02:40.278616   24117 start.go:563] Will wait 60s for crictl version
	I1213 19:02:40.278661   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:02:40.281538   24117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 19:02:40.312723   24117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1213 19:02:40.312812   24117 ssh_runner.go:195] Run: crio --version
	I1213 19:02:40.346272   24117 ssh_runner.go:195] Run: crio --version
	I1213 19:02:40.379851   24117 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1213 19:02:40.381328   24117 cli_runner.go:164] Run: docker network inspect addons-237678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:02:40.397635   24117 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:02:40.400996   24117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:02:40.410607   24117 kubeadm.go:883] updating cluster {Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:02:40.410720   24117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:40.410772   24117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:02:40.472930   24117 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:02:40.472956   24117 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:02:40.473044   24117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:02:40.506189   24117 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:02:40.506210   24117 cache_images.go:84] Images are preloaded, skipping loading
	I1213 19:02:40.506217   24117 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1213 19:02:40.506292   24117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-237678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:02:40.506358   24117 ssh_runner.go:195] Run: crio config
	I1213 19:02:40.545000   24117 cni.go:84] Creating CNI manager for ""
	I1213 19:02:40.545020   24117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:02:40.545036   24117 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 19:02:40.545058   24117 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-237678 NodeName:addons-237678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:02:40.545173   24117 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-237678"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:02:40.545236   24117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 19:02:40.552922   24117 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 19:02:40.552985   24117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:02:40.560532   24117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:02:40.576013   24117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:02:40.591443   24117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
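
The rendered kubeadm config shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before being moved into place. A quick way to sanity-check a document from it is to unmarshal the fields of interest; the sketch below parses the KubeletConfiguration values from this log (assuming the sigs.k8s.io/yaml module, which honors the JSON-style field names used by Kubernetes config types):

package main

import (
	"fmt"

	"sigs.k8s.io/yaml" // assumed dependency: go get sigs.k8s.io/yaml
)

// Fields of interest from the KubeletConfiguration document above.
type kubeletCfg struct {
	CgroupDriver             string `json:"cgroupDriver"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `json:"failSwapOn"`
}

func main() {
	doc := []byte(`cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`)
	var c kubeletCfg
	if err := yaml.Unmarshal(doc, &c); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s endpoint=%s failSwapOn=%v\n",
		c.CgroupDriver, c.ContainerRuntimeEndpoint, c.FailSwapOn)
}
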
	I1213 19:02:40.606683   24117 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:02:40.609692   24117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:02:40.618870   24117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:40.695564   24117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:02:40.707068   24117 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678 for IP: 192.168.49.2
	I1213 19:02:40.707092   24117 certs.go:194] generating shared ca certs ...
	I1213 19:02:40.707113   24117 certs.go:226] acquiring lock for ca certs: {Name:mk2fbaac84ab0753d470e1940d79f7bab81bd059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:40.707258   24117 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key
	I1213 19:02:40.943570   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt ...
	I1213 19:02:40.943602   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt: {Name:mkdb34501d4529e4f582fc9651a84aaa3424c28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:40.943769   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key ...
	I1213 19:02:40.943779   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key: {Name:mk2e973a83de73ccad632e5b26aff21214d2bdc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:40.943850   24117 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key
	I1213 19:02:41.084013   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.crt ...
	I1213 19:02:41.084040   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.crt: {Name:mk3c2611246939751ed236e914b6e8b65b3fc451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.084205   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key ...
	I1213 19:02:41.084216   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key: {Name:mk7464522b7ff8a643d52f3c19186a8d46486aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.084284   24117 certs.go:256] generating profile certs ...
	I1213 19:02:41.084336   24117 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.key
	I1213 19:02:41.084350   24117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt with IP's: []
	I1213 19:02:41.214303   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt ...
	I1213 19:02:41.214331   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: {Name:mk785f1592568ee3f28a7bac32c45dd7c605fa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.214475   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.key ...
	I1213 19:02:41.214484   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.key: {Name:mk96a6dfd7700d17587300963698b5d2cfb8a38d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.214550   24117 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70
	I1213 19:02:41.214569   24117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 19:02:41.336159   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70 ...
	I1213 19:02:41.336189   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70: {Name:mkb3a9df19cbc8acf913abf9a3a879b3ccb711bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.336346   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70 ...
	I1213 19:02:41.336360   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70: {Name:mkb33c54c0d6c298791786897d053bb1ca298d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.336429   24117 certs.go:381] copying /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70 -> /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt
	I1213 19:02:41.336498   24117 certs.go:385] copying /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70 -> /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key
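
The apiserver serving certificate above is generated with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]: the cluster service IP, loopback, and the node IP. A self-contained Go sketch of issuing a cert with those SANs follows; it is self-signed for brevity, whereas the real cert is signed with the minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs copied from the log: service IP, loopback, and node IP.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"),
		net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.49.2"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // ~26280h, matching CertExpiration in the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	// Self-signed here; minikube signs with the minikubeCA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
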
	I1213 19:02:41.336548   24117 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key
	I1213 19:02:41.336565   24117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt with IP's: []
	I1213 19:02:41.400015   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt ...
	I1213 19:02:41.400044   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt: {Name:mk1d7f6e55002a189386cb19a8bb439c3435565c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.400196   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key ...
	I1213 19:02:41.400206   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key: {Name:mk040a144459cc8a1de1c98c510410be1ef4314a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.400367   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 19:02:41.400403   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem (1078 bytes)
	I1213 19:02:41.400426   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:02:41.400453   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/key.pem (1675 bytes)
	I1213 19:02:41.401069   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:02:41.422176   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 19:02:41.442572   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:02:41.463146   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 19:02:41.483523   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:02:41.504555   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:02:41.525683   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:02:41.549135   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 19:02:41.570083   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:02:41.591494   24117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:02:41.606545   24117 ssh_runner.go:195] Run: openssl version
	I1213 19:02:41.611344   24117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 19:02:41.619519   24117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:02:41.622595   24117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:02 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:02:41.622638   24117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:02:41.628818   24117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 19:02:41.637212   24117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:02:41.640237   24117 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 19:02:41.640284   24117 kubeadm.go:392] StartCluster: {Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:02:41.640376   24117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:02:41.640414   24117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:02:41.672019   24117 cri.go:89] found id: ""
	I1213 19:02:41.672086   24117 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:02:41.679799   24117 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:02:41.687191   24117 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1213 19:02:41.687232   24117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:02:41.694554   24117 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:02:41.694570   24117 kubeadm.go:157] found existing configuration files:
	
	I1213 19:02:41.694605   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:02:41.702006   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:02:41.702058   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:02:41.709216   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:02:41.716648   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:02:41.716706   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:02:41.724068   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:02:41.732969   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:02:41.733021   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:02:41.740280   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:02:41.747725   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:02:41.747780   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:02:41.755007   24117 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 19:02:41.804655   24117 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1213 19:02:41.851914   24117 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:02:49.774295   24117 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 19:02:49.774365   24117 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 19:02:49.774463   24117 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1213 19:02:49.774523   24117 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1213 19:02:49.774558   24117 kubeadm.go:310] OS: Linux
	I1213 19:02:49.774599   24117 kubeadm.go:310] CGROUPS_CPU: enabled
	I1213 19:02:49.774651   24117 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1213 19:02:49.774691   24117 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1213 19:02:49.774738   24117 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1213 19:02:49.774779   24117 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1213 19:02:49.774823   24117 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1213 19:02:49.774888   24117 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1213 19:02:49.774976   24117 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1213 19:02:49.775046   24117 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1213 19:02:49.775155   24117 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:02:49.775269   24117 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:02:49.775377   24117 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:02:49.775437   24117 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:02:49.777389   24117 out.go:235]   - Generating certificates and keys ...
	I1213 19:02:49.777475   24117 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 19:02:49.777552   24117 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 19:02:49.777614   24117 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 19:02:49.777669   24117 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 19:02:49.777724   24117 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 19:02:49.777771   24117 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 19:02:49.777831   24117 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 19:02:49.777937   24117 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-237678 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:02:49.778030   24117 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 19:02:49.778247   24117 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-237678 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:02:49.778318   24117 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 19:02:49.778376   24117 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 19:02:49.778416   24117 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 19:02:49.778466   24117 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:02:49.778510   24117 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:02:49.778557   24117 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:02:49.778608   24117 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:02:49.778664   24117 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:02:49.778713   24117 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:02:49.778783   24117 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:02:49.778843   24117 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:02:49.780499   24117 out.go:235]   - Booting up control plane ...
	I1213 19:02:49.780604   24117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:02:49.780681   24117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:02:49.780744   24117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:02:49.780846   24117 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:02:49.780986   24117 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:02:49.781059   24117 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 19:02:49.781222   24117 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:02:49.781333   24117 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:02:49.781425   24117 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.621764ms
	I1213 19:02:49.781534   24117 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 19:02:49.781624   24117 kubeadm.go:310] [api-check] The API server is healthy after 4.001359014s
	I1213 19:02:49.781739   24117 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 19:02:49.781874   24117 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 19:02:49.781965   24117 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 19:02:49.782154   24117 kubeadm.go:310] [mark-control-plane] Marking the node addons-237678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 19:02:49.782245   24117 kubeadm.go:310] [bootstrap-token] Using token: ufky5y.p8vtytenxjrrx9g5
	I1213 19:02:49.784910   24117 out.go:235]   - Configuring RBAC rules ...
	I1213 19:02:49.785025   24117 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 19:02:49.785143   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 19:02:49.785322   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 19:02:49.785487   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 19:02:49.785621   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 19:02:49.785730   24117 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 19:02:49.785866   24117 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 19:02:49.785928   24117 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 19:02:49.785996   24117 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 19:02:49.786006   24117 kubeadm.go:310] 
	I1213 19:02:49.786088   24117 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 19:02:49.786100   24117 kubeadm.go:310] 
	I1213 19:02:49.786163   24117 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 19:02:49.786169   24117 kubeadm.go:310] 
	I1213 19:02:49.786190   24117 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 19:02:49.786244   24117 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 19:02:49.786290   24117 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 19:02:49.786296   24117 kubeadm.go:310] 
	I1213 19:02:49.786340   24117 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 19:02:49.786346   24117 kubeadm.go:310] 
	I1213 19:02:49.786388   24117 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 19:02:49.786394   24117 kubeadm.go:310] 
	I1213 19:02:49.786441   24117 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 19:02:49.786512   24117 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 19:02:49.786599   24117 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 19:02:49.786609   24117 kubeadm.go:310] 
	I1213 19:02:49.786685   24117 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 19:02:49.786788   24117 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 19:02:49.786799   24117 kubeadm.go:310] 
	I1213 19:02:49.786866   24117 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ufky5y.p8vtytenxjrrx9g5 \
	I1213 19:02:49.786952   24117 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:638961caa3d3d382bee193acde3e67d6eb5a416d1c68186140e9cf3d3b49b876 \
	I1213 19:02:49.786972   24117 kubeadm.go:310] 	--control-plane 
	I1213 19:02:49.786978   24117 kubeadm.go:310] 
	I1213 19:02:49.787051   24117 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 19:02:49.787057   24117 kubeadm.go:310] 
	I1213 19:02:49.787123   24117 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ufky5y.p8vtytenxjrrx9g5 \
	I1213 19:02:49.787234   24117 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:638961caa3d3d382bee193acde3e67d6eb5a416d1c68186140e9cf3d3b49b876 
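
The --discovery-token-ca-cert-hash printed in the join command above is not arbitrary: it is the hex-encoded SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. The sketch below recomputes it from the ca.crt path used throughout this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's CA cert hash: SHA-256 over the DER-encoded
// SubjectPublicKeyInfo of the cluster CA certificate.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from this log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
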
	I1213 19:02:49.787245   24117 cni.go:84] Creating CNI manager for ""
	I1213 19:02:49.787252   24117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:02:49.788982   24117 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1213 19:02:49.790292   24117 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 19:02:49.793850   24117 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1213 19:02:49.793866   24117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 19:02:49.810277   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 19:02:49.998712   24117 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 19:02:49.998761   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:49.998830   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-237678 minikube.k8s.io/updated_at=2024_12_13T19_02_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=addons-237678 minikube.k8s.io/primary=true
	I1213 19:02:50.062630   24117 ops.go:34] apiserver oom_adj: -16
	I1213 19:02:50.062777   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:50.563107   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:51.063793   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:51.563181   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:52.063061   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:52.562909   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:53.062964   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:53.563100   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:54.063613   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:54.563852   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:54.624663   24117 kubeadm.go:1113] duration metric: took 4.625944665s to wait for elevateKubeSystemPrivileges
	I1213 19:02:54.624714   24117 kubeadm.go:394] duration metric: took 12.984432698s to StartCluster
	I1213 19:02:54.624738   24117 settings.go:142] acquiring lock: {Name:mk1d582ab037339c5185379bff3c01140f06f006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:54.624874   24117 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:02:54.625413   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/kubeconfig: {Name:mka9db62e71382b1e468379ab2f4120f5c10e65e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:54.625628   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 19:02:54.625656   24117 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:02:54.625714   24117 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 19:02:54.625836   24117 addons.go:69] Setting yakd=true in profile "addons-237678"
	I1213 19:02:54.625850   24117 addons.go:69] Setting cloud-spanner=true in profile "addons-237678"
	I1213 19:02:54.625869   24117 addons.go:234] Setting addon cloud-spanner=true in "addons-237678"
	I1213 19:02:54.625872   24117 addons.go:69] Setting metrics-server=true in profile "addons-237678"
	I1213 19:02:54.625877   24117 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-237678"
	I1213 19:02:54.625898   24117 addons.go:234] Setting addon metrics-server=true in "addons-237678"
	I1213 19:02:54.625901   24117 addons.go:69] Setting default-storageclass=true in profile "addons-237678"
	I1213 19:02:54.625905   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.625904   24117 config.go:182] Loaded profile config "addons-237678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:54.625916   24117 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-237678"
	I1213 19:02:54.625921   24117 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-237678"
	I1213 19:02:54.625930   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.625948   24117 addons.go:69] Setting ingress-dns=true in profile "addons-237678"
	I1213 19:02:54.625958   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.625964   24117 addons.go:234] Setting addon ingress-dns=true in "addons-237678"
	I1213 19:02:54.625995   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626041   24117 addons.go:69] Setting storage-provisioner=true in profile "addons-237678"
	I1213 19:02:54.626063   24117 addons.go:234] Setting addon storage-provisioner=true in "addons-237678"
	I1213 19:02:54.626091   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626272   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626436   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626451   24117 addons.go:69] Setting inspektor-gadget=true in profile "addons-237678"
	I1213 19:02:54.626455   24117 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-237678"
	I1213 19:02:54.626462   24117 addons.go:69] Setting volcano=true in profile "addons-237678"
	I1213 19:02:54.626465   24117 addons.go:234] Setting addon inspektor-gadget=true in "addons-237678"
	I1213 19:02:54.626451   24117 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-237678"
	I1213 19:02:54.626479   24117 addons.go:234] Setting addon volcano=true in "addons-237678"
	I1213 19:02:54.626492   24117 addons.go:69] Setting registry=true in profile "addons-237678"
	I1213 19:02:54.626496   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626503   24117 addons.go:234] Setting addon registry=true in "addons-237678"
	I1213 19:02:54.626515   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626522   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626528   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626546   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626439   24117 addons.go:234] Setting addon yakd=true in "addons-237678"
	I1213 19:02:54.626953   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626958   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.627126   24117 addons.go:69] Setting gcp-auth=true in profile "addons-237678"
	I1213 19:02:54.627129   24117 addons.go:69] Setting volumesnapshots=true in profile "addons-237678"
	I1213 19:02:54.627143   24117 addons.go:234] Setting addon volumesnapshots=true in "addons-237678"
	I1213 19:02:54.627146   24117 mustload.go:65] Loading cluster: addons-237678
	I1213 19:02:54.627169   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.627327   24117 config.go:182] Loaded profile config "addons-237678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:54.627450   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.627633   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.627706   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626467   24117 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-237678"
	I1213 19:02:54.627926   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.628457   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.625838   24117 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-237678"
	I1213 19:02:54.628791   24117 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-237678"
	I1213 19:02:54.628825   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626441   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.631630   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.632350   24117 out.go:177] * Verifying Kubernetes components...
	I1213 19:02:54.625881   24117 addons.go:69] Setting ingress=true in profile "addons-237678"
	I1213 19:02:54.632765   24117 addons.go:234] Setting addon ingress=true in "addons-237678"
	I1213 19:02:54.626440   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626924   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626481   24117 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-237678"
	I1213 19:02:54.632832   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.634332   24117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:54.655702   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.655979   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.656911   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
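
The out-of-order timestamps in the "Setting addon ..." block above are expected: each addon is enabled on its own goroutine, so the log lines interleave. A rough sketch of that fan-out, with enableAddon as a hypothetical stand-in for the per-addon setup (intent logging, host check, container inspect):

package sketch

import (
	"fmt"
	"sync"
)

// enableAll fans addon enablement out to goroutines, which is why the
// timestamps above interleave. Errors are collected rather than
// aborting the remaining addons.
func enableAll(addons []string, enableAddon func(string) error) []error {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		errs []error
	)
	for _, name := range addons {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := enableAddon(name); err != nil {
				mu.Lock()
				errs = append(errs, fmt.Errorf("%s: %w", name, err))
				mu.Unlock()
			}
		}(name)
	}
	wg.Wait()
	return errs
}
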
	I1213 19:02:54.662674   24117 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 19:02:54.662741   24117 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1213 19:02:54.662865   24117 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1213 19:02:54.664384   24117 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:02:54.664405   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 19:02:54.664453   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.665058   24117 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:02:54.665102   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1213 19:02:54.665156   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.665702   24117 out.go:177]   - Using image docker.io/registry:2.8.3
	I1213 19:02:54.667186   24117 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 19:02:54.667341   24117 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 19:02:54.667357   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 19:02:54.667404   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.668402   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 19:02:54.668418   24117 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 19:02:54.668461   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.672445   24117 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 19:02:54.676045   24117 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:02:54.676077   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 19:02:54.676136   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
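
Each `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` line above recovers the host port Docker mapped to the container's SSH port; the sshutil lines that follow then dial 127.0.0.1 on that port (32768 in this run). The same lookup in Go:

package sketch

import (
	"os/exec"
	"strings"
)

// sshHostPort recovers the host port mapped to the container's 22/tcp,
// using the same Go template as the cli_runner lines above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
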
	I1213 19:02:54.689294   24117 addons.go:234] Setting addon default-storageclass=true in "addons-237678"
	I1213 19:02:54.689349   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.689847   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	W1213 19:02:54.723560   24117 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 19:02:54.726534   24117 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1213 19:02:54.726747   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:02:54.726794   24117 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1213 19:02:54.728115   24117 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1213 19:02:54.728276   24117 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1213 19:02:54.728394   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.729426   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 19:02:54.729444   24117 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 19:02:54.729495   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.730505   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1213 19:02:54.731965   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:02:54.734263   24117 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:02:54.734283   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 19:02:54.734403   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.734594   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 19:02:54.735958   24117 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1213 19:02:54.737492   24117 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1213 19:02:54.737512   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 19:02:54.737566   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.737704   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 19:02:54.738379   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.740034   24117 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1213 19:02:54.740096   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 19:02:54.741732   24117 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:02:54.741745   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 19:02:54.741798   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.743477   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 19:02:54.744737   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 19:02:54.746011   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 19:02:54.747172   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 19:02:54.747228   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.748321   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 19:02:54.749561   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 19:02:54.749580   24117 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 19:02:54.749577   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.749641   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.749888   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 19:02:54.751372   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 19:02:54.751394   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 19:02:54.751459   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.761103   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.777231   24117 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 19:02:54.777262   24117 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 19:02:54.777320   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.784555   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.785112   24117 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-237678"
	I1213 19:02:54.785156   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.785360   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.785631   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.786636   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.791827   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.793202   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.799473   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.799743   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.803815   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.804301   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.804311   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.813079   24117 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 19:02:54.814387   24117 out.go:177]   - Using image docker.io/busybox:stable
	I1213 19:02:54.815849   24117 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:02:54.815870   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 19:02:54.815927   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	W1213 19:02:54.827607   24117 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 19:02:54.827641   24117 retry.go:31] will retry after 304.050863ms: ssh: handshake failed: EOF
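
A handshake EOF this early is expected: sshd inside the freshly started container may not be accepting connections yet, so the dial fails and retry.go schedules another attempt after a short randomized delay. A sketch of that retry shape; the jitter bounds here are illustrative, not minikube's actual policy:

package sketch

import (
	"math/rand"
	"time"
)

// dialWithRetry retries a failed dial after a short jittered delay,
// as the "will retry after 304.050863ms" line above reports.
func dialWithRetry(dial func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dial(); err == nil {
			return nil
		}
		time.Sleep(200*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond)
	}
	return err
}
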
	I1213 19:02:54.828188   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
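
The pipeline above reads the coredns ConfigMap, uses sed to splice a hosts block ahead of the forward directive so that host.minikube.internal resolves to the gateway IP (192.168.49.1) from inside pods, then replaces the ConfigMap. The same Corefile edit expressed as a Go string transform:

package sketch

import (
	"fmt"
	"strings"
)

// injectHostRecord splices a hosts{} block ahead of CoreDNS's forward
// directive, matching the sed edit in the log line above.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
			b.WriteString(block)
		}
		b.WriteString(line)
	}
	return b.String()
}
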
	I1213 19:02:54.849242   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.911169   24117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:02:55.124459   24117 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 19:02:55.124489   24117 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 19:02:55.129036   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:02:55.220877   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:02:55.232565   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:02:55.313229   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:02:55.408044   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 19:02:55.412248   24117 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:02:55.412273   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 19:02:55.413983   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:02:55.416660   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:02:55.418208   24117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 19:02:55.418229   24117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 19:02:55.418441   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 19:02:55.418460   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 19:02:55.432076   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 19:02:55.432104   24117 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 19:02:55.508734   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 19:02:55.509865   24117 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:02:55.509887   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1213 19:02:55.620174   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 19:02:55.620281   24117 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 19:02:55.709414   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:02:55.710261   24117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 19:02:55.710322   24117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 19:02:55.714824   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 19:02:55.714906   24117 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 19:02:55.717957   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:02:55.730468   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 19:02:55.730561   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 19:02:55.933458   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:02:55.933487   24117 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 19:02:56.121572   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 19:02:56.121649   24117 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 19:02:56.208878   24117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 19:02:56.208975   24117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 19:02:56.313233   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 19:02:56.313310   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 19:02:56.420740   24117 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.592523299s)
	I1213 19:02:56.420862   24117 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 19:02:56.420839   24117 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.509644119s)
	I1213 19:02:56.422243   24117 node_ready.go:35] waiting up to 6m0s for node "addons-237678" to be "Ready" ...
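
From here node_ready.go polls the node object until its Ready condition reports "True" (it flips at 19:03:13 below). Roughly the same wait with client-go; clientset construction is omitted and the intervals are illustrative:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node's Ready condition until True or timeout,
// roughly what the 6m wait in node_ready.go above amounts to.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
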
	I1213 19:02:56.423778   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:02:56.424211   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 19:02:56.424230   24117 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 19:02:56.431880   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:02:56.431920   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 19:02:56.615396   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 19:02:56.615495   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 19:02:56.624224   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.495149859s)
	I1213 19:02:56.624356   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.403451703s)
	I1213 19:02:56.711425   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:02:56.719687   24117 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:02:56.719714   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 19:02:56.922938   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 19:02:56.922961   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 19:02:57.111034   24117 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-237678" context rescaled to 1 replicas
	I1213 19:02:57.212048   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:02:57.229424   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 19:02:57.229514   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 19:02:57.620725   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 19:02:57.620760   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 19:02:57.826700   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 19:02:57.826781   24117 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 19:02:58.120264   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 19:02:58.120296   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 19:02:58.228486   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.995881659s)
	I1213 19:02:58.326059   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 19:02:58.326142   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 19:02:58.515683   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:02:58.529766   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:02:58.529848   24117 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 19:02:58.715432   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:02:58.814760   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.501445103s)
	I1213 19:02:58.814841   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.406676286s)
	I1213 19:03:00.617426   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.203408375s)
	I1213 19:03:00.617468   24117 addons.go:475] Verifying addon ingress=true in "addons-237678"
	I1213 19:03:00.617690   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.201003303s)
	I1213 19:03:00.617797   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.10898039s)
	I1213 19:03:00.617873   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.908372769s)
	I1213 19:03:00.617890   24117 addons.go:475] Verifying addon registry=true in "addons-237678"
	I1213 19:03:00.618066   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.900047215s)
	I1213 19:03:00.618173   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.194370756s)
	I1213 19:03:00.618188   24117 addons.go:475] Verifying addon metrics-server=true in "addons-237678"
	I1213 19:03:00.618233   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.906724049s)
	I1213 19:03:00.619624   24117 out.go:177] * Verifying ingress addon...
	I1213 19:03:00.619634   24117 out.go:177] * Verifying registry addon...
	I1213 19:03:00.622051   24117 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-237678 service yakd-dashboard -n yakd-dashboard
	
	I1213 19:03:00.623957   24117 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 19:03:00.624575   24117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 19:03:00.628536   24117 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:03:00.628584   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:00.628831   24117 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 19:03:00.628853   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
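
The kapi.go lines above (and the long run of "current state: Pending" lines that follow) boil down to listing pods by label selector and checking whether each has left Pending. A client-go sketch of that check:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsRunning lists pods by label selector and reports whether every
// match is Running, roughly the check kapi.go repeats above for the
// registry and ingress-nginx selectors.
func podsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}
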
	I1213 19:03:00.925968   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:01.131665   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:01.131959   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:01.236935   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.024770164s)
	W1213 19:03:01.236988   24117 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:03:01.237010   24117 retry.go:31] will retry after 231.591018ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
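
This failure is an ordering race, not a broken manifest: the batch apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in one shot, and the custom resource fails to map because its CRD is not yet established in discovery, hence "ensure CRDs are installed first". minikube simply retries (below, with --force). A sketch of the stricter fix, waiting for the CRD's Established condition before applying custom resources:

package sketch

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// crdEstablished waits until the named CRD reports Established, the
// precondition the error message above is asking for.
func crdEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
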
	I1213 19:03:01.469055   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:03:01.630252   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:01.630770   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:02.012738   24117 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 19:03:02.012812   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:03:02.033165   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:03:02.132656   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:02.133025   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:02.238169   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.522669523s)
	I1213 19:03:02.238261   24117 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-237678"
	I1213 19:03:02.240648   24117 out.go:177] * Verifying csi-hostpath-driver addon...
	I1213 19:03:02.242831   24117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 19:03:02.309660   24117 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:02.309689   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:02.325958   24117 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 19:03:02.343109   24117 addons.go:234] Setting addon gcp-auth=true in "addons-237678"
	I1213 19:03:02.343170   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:03:02.343565   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:03:02.361491   24117 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 19:03:02.361549   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:03:02.382361   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:03:02.627438   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:02.628138   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:02.746109   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:03.127015   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:03.127524   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:03.245416   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:03.426094   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:03.627955   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:03.628368   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:03.746645   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:04.127761   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:04.128244   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:04.246313   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:04.539913   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.070789778s)
	I1213 19:03:04.539982   24117 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.17845697s)
	I1213 19:03:04.541859   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:04.543383   24117 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 19:03:04.544761   24117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 19:03:04.544775   24117 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 19:03:04.561871   24117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 19:03:04.561898   24117 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 19:03:04.577613   24117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:03:04.577635   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 19:03:04.593123   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:03:04.627683   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:04.628091   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:04.748696   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:04.921725   24117 addons.go:475] Verifying addon gcp-auth=true in "addons-237678"
	I1213 19:03:04.923146   24117 out.go:177] * Verifying gcp-auth addon...
	I1213 19:03:04.925822   24117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 19:03:04.927872   24117 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 19:03:04.927896   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:05.127391   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:05.127731   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:05.245946   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:05.428411   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:05.627054   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:05.627536   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:05.745593   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:05.924842   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:05.928998   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:06.127582   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:06.128033   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:06.246115   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:06.428378   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:06.627006   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:06.627627   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:06.745717   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:06.928960   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:07.127683   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:07.128177   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:07.246190   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:07.429001   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:07.628199   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:07.628461   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:07.745730   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:07.925037   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:07.929411   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:08.127030   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:08.127704   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:08.246435   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:08.428483   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:08.626873   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:08.627356   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:08.746545   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:08.929147   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:09.127761   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:09.128461   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:09.246257   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:09.428302   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:09.627765   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:09.628099   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:09.746290   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:09.925792   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:11.925898   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:13.430413   24117 node_ready.go:49] node "addons-237678" has status "Ready":"True"
	I1213 19:03:13.430441   24117 node_ready.go:38] duration metric: took 17.008124974s for node "addons-237678" to be "Ready" ...
	I1213 19:03:13.430452   24117 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:03:13.431334   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:13.515259   24117 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:13.711001   24117 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:03:13.711031   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:13.711600   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:13.748405   24117 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:13.748438   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:15.521456   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:18.021302   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:20.520557   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:23.021489   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:25.520873   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:28.020230   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:30.021299   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:32.032540   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:34.521491   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:36.522368   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:39.020073   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:41.020925   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:42.630184   24117 kapi.go:107] duration metric: took 42.005607832s to wait for kubernetes.io/minikube-addons=registry ...
	[... repeated kapi.go:96 "waiting for pod" polls (ingress-nginx / csi-hostpath-driver / gcp-auth) elided ...]
	I1213 19:03:43.520351   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:45.521050   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:48.021002   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:50.021143   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:52.520997   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:54.521228   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:56.521415   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:03:59.020693   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:04:01.022151   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:04:03.521081   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	[... repeated kapi.go:96 "waiting for pod" polls elided ...]
	I1213 19:04:06.021325   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:06.127505   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.247854   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.428548   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:06.628046   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.746956   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.929343   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.127225   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.247542   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.429290   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.628442   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.747242   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.929603   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.127493   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.247462   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.429641   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.520095   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:08.694434   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.795495   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.929743   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.154569   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.247149   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.428953   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.628707   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.748198   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.929598   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.128280   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.247191   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.428945   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.520407   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:10.627840   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.746805   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.928704   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.020172   24117 pod_ready.go:93] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.020192   24117 pod_ready.go:82] duration metric: took 57.504892803s for pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.020202   24117 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vdvvc" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.024180   24117 pod_ready.go:93] pod "coredns-7c65d6cfc9-vdvvc" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.024199   24117 pod_ready.go:82] duration metric: took 3.990866ms for pod "coredns-7c65d6cfc9-vdvvc" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.024214   24117 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.028953   24117 pod_ready.go:93] pod "etcd-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.028988   24117 pod_ready.go:82] duration metric: took 4.768115ms for pod "etcd-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.029001   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.032950   24117 pod_ready.go:93] pod "kube-apiserver-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.032967   24117 pod_ready.go:82] duration metric: took 3.959136ms for pod "kube-apiserver-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.032975   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.036815   24117 pod_ready.go:93] pod "kube-controller-manager-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.036832   24117 pod_ready.go:82] duration metric: took 3.85051ms for pod "kube-controller-manager-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.036846   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xhqt" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.128335   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.247662   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.418368   24117 pod_ready.go:93] pod "kube-proxy-8xhqt" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.418388   24117 pod_ready.go:82] duration metric: took 381.535082ms for pod "kube-proxy-8xhqt" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.418398   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.428857   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.628117   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.746910   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.819217   24117 pod_ready.go:93] pod "kube-scheduler-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.819244   24117 pod_ready.go:82] duration metric: took 400.838452ms for pod "kube-scheduler-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.819258   24117 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.928871   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.128084   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.247594   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.429097   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.627673   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.748387   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.929711   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.128100   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.247620   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.429563   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.628120   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.746945   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.825861   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:13.931204   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.128724   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.246740   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.429461   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.628331   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.747206   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.929242   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.128252   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.248689   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.428728   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.627719   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.747111   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.929056   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.128632   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.247450   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.324923   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:16.429655   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.628275   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.748338   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.929432   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.127967   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.247020   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:17.429989   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.629141   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.747103   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:17.929783   24117 kapi.go:107] duration metric: took 1m13.003957087s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 19:04:17.932040   24117 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-237678 cluster.
	I1213 19:04:17.933660   24117 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 19:04:17.935105   24117 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
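	[editor's note] The three out.go messages above describe the gcp-auth addon's admission-time credential injection and its opt-out label. A minimal sketch of acting on them, assuming a hypothetical pod name "demo" (only the `gcp-auth-skip-secret` label and the --refresh hint come from the messages themselves):
	# hypothetical: create a pod the gcp-auth webhook will skip, because the
	# opt-out label is present at admission time
	kubectl --context addons-237678 run demo --image=nginx --labels=gcp-auth-skip-secret=true
	# per the last message above, pre-existing pods get credentials only after
	# being recreated, or after rerunning the addon enable with --refresh:
	minikube -p addons-237678 addons enable gcp-auth --refresh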
	I1213 19:04:18.129463   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.312289   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.325382   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:18.628918   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.810867   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.128680   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.312530   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.628175   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.747698   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.128002   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.246959   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.628331   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.748888   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.825081   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:21.128812   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.247802   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:21.628297   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.747331   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.127798   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.247056   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.628833   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.747759   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.128551   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.246967   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.325975   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:23.628263   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.748333   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.128995   24117 kapi.go:107] duration metric: took 1m23.505038003s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 19:04:24.246532   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.747788   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.247023   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.747855   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.825214   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:26.247642   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:26.746539   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.248313   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.746622   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.247247   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.324959   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:28.747521   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.247485   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.746895   24117 kapi.go:107] duration metric: took 1m27.504065503s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 19:04:29.748630   24117 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, storage-provisioner, default-storageclass, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 19:04:29.749754   24117 addons.go:510] duration metric: took 1m35.124044409s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin ingress-dns storage-provisioner default-storageclass cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1213 19:04:30.824491   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:33.326340   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:35.825043   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:38.324708   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:40.325107   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:42.824137   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:44.824643   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:46.896863   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:49.325012   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:51.825193   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:54.325049   24117 pod_ready.go:93] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:54.325071   24117 pod_ready.go:82] duration metric: took 42.505804813s for pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:54.325082   24117 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5ppp7" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:54.329474   24117 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5ppp7" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:54.329494   24117 pod_ready.go:82] duration metric: took 4.404442ms for pod "nvidia-device-plugin-daemonset-5ppp7" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:54.329510   24117 pod_ready.go:39] duration metric: took 1m40.899045115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
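	[editor's note] The pod_ready polling summarized above is done inside minikube via client-go; a roughly equivalent manual check, sketched here under the assumption that kubectl is pointed at the same cluster, would use the same label selectors and 6m0s timeout:
	kubectl --context addons-237678 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s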
	I1213 19:04:54.329527   24117 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:04:54.329557   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:04:54.329608   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:04:54.362338   24117 cri.go:89] found id: "768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:54.362362   24117 cri.go:89] found id: ""
	I1213 19:04:54.362370   24117 logs.go:282] 1 containers: [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323]
	I1213 19:04:54.362423   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.365703   24117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:04:54.365772   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:04:54.399240   24117 cri.go:89] found id: "2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:54.399267   24117 cri.go:89] found id: ""
	I1213 19:04:54.399297   24117 logs.go:282] 1 containers: [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06]
	I1213 19:04:54.399352   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.402738   24117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:04:54.402794   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:04:54.437000   24117 cri.go:89] found id: "c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:54.437028   24117 cri.go:89] found id: ""
	I1213 19:04:54.437038   24117 logs.go:282] 1 containers: [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c]
	I1213 19:04:54.437080   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.440562   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:04:54.440619   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:04:54.473542   24117 cri.go:89] found id: "b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:54.473568   24117 cri.go:89] found id: ""
	I1213 19:04:54.473586   24117 logs.go:282] 1 containers: [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d]
	I1213 19:04:54.473643   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.476994   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:04:54.477050   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:04:54.510230   24117 cri.go:89] found id: "9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:54.510256   24117 cri.go:89] found id: ""
	I1213 19:04:54.510264   24117 logs.go:282] 1 containers: [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65]
	I1213 19:04:54.510321   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.513487   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:04:54.513557   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:04:54.546681   24117 cri.go:89] found id: "96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:54.546702   24117 cri.go:89] found id: ""
	I1213 19:04:54.546709   24117 logs.go:282] 1 containers: [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986]
	I1213 19:04:54.546764   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.550149   24117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:04:54.550198   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:04:54.582976   24117 cri.go:89] found id: "d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:04:54.583003   24117 cri.go:89] found id: ""
	I1213 19:04:54.583017   24117 logs.go:282] 1 containers: [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5]
	I1213 19:04:54.583059   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.586398   24117 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:04:54.586426   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:04:54.657463   24117 logs.go:123] Gathering logs for kubelet ...
	I1213 19:04:54.657497   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:04:54.737754   24117 logs.go:123] Gathering logs for kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] ...
	I1213 19:04:54.737789   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:54.781402   24117 logs.go:123] Gathering logs for etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] ...
	I1213 19:04:54.781435   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:54.827165   24117 logs.go:123] Gathering logs for kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] ...
	I1213 19:04:54.827203   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:54.865320   24117 logs.go:123] Gathering logs for kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] ...
	I1213 19:04:54.865348   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:54.919064   24117 logs.go:123] Gathering logs for container status ...
	I1213 19:04:54.919105   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:04:54.961693   24117 logs.go:123] Gathering logs for dmesg ...
	I1213 19:04:54.961722   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:04:54.973761   24117 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:04:54.973790   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:04:55.070984   24117 logs.go:123] Gathering logs for coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] ...
	I1213 19:04:55.071020   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:55.123537   24117 logs.go:123] Gathering logs for kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] ...
	I1213 19:04:55.123579   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:55.158283   24117 logs.go:123] Gathering logs for kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] ...
	I1213 19:04:55.158307   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:04:57.691171   24117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:04:57.705266   24117 api_server.go:72] duration metric: took 2m3.07957193s to wait for apiserver process to appear ...
	I1213 19:04:57.705292   24117 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:04:57.705351   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:04:57.705406   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:04:57.738998   24117 cri.go:89] found id: "768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:57.739019   24117 cri.go:89] found id: ""
	I1213 19:04:57.739027   24117 logs.go:282] 1 containers: [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323]
	I1213 19:04:57.739074   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.742424   24117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:04:57.742494   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:04:57.783804   24117 cri.go:89] found id: "2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:57.783830   24117 cri.go:89] found id: ""
	I1213 19:04:57.783839   24117 logs.go:282] 1 containers: [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06]
	I1213 19:04:57.783894   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.808003   24117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:04:57.808080   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:04:57.843793   24117 cri.go:89] found id: "c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:57.843818   24117 cri.go:89] found id: ""
	I1213 19:04:57.843827   24117 logs.go:282] 1 containers: [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c]
	I1213 19:04:57.843867   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.847190   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:04:57.847246   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:04:57.881332   24117 cri.go:89] found id: "b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:57.881362   24117 cri.go:89] found id: ""
	I1213 19:04:57.881372   24117 logs.go:282] 1 containers: [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d]
	I1213 19:04:57.881418   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.885381   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:04:57.885448   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:04:57.921094   24117 cri.go:89] found id: "9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:57.921120   24117 cri.go:89] found id: ""
	I1213 19:04:57.921130   24117 logs.go:282] 1 containers: [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65]
	I1213 19:04:57.921183   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.924692   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:04:57.924760   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:04:57.956925   24117 cri.go:89] found id: "96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:57.956949   24117 cri.go:89] found id: ""
	I1213 19:04:57.956956   24117 logs.go:282] 1 containers: [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986]
	I1213 19:04:57.957004   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.960227   24117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:04:57.960276   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:04:57.993260   24117 cri.go:89] found id: "d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:04:57.993281   24117 cri.go:89] found id: ""
	I1213 19:04:57.993288   24117 logs.go:282] 1 containers: [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5]
	I1213 19:04:57.993333   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.996560   24117 logs.go:123] Gathering logs for kubelet ...
	I1213 19:04:57.996581   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:04:58.084333   24117 logs.go:123] Gathering logs for kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] ...
	I1213 19:04:58.084371   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:58.130474   24117 logs.go:123] Gathering logs for etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] ...
	I1213 19:04:58.130505   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:58.176247   24117 logs.go:123] Gathering logs for kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] ...
	I1213 19:04:58.176281   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:58.209200   24117 logs.go:123] Gathering logs for kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] ...
	I1213 19:04:58.209238   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:04:58.242860   24117 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:04:58.242886   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:04:58.322594   24117 logs.go:123] Gathering logs for dmesg ...
	I1213 19:04:58.322631   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:04:58.334843   24117 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:04:58.334874   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:04:58.435698   24117 logs.go:123] Gathering logs for coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] ...
	I1213 19:04:58.435724   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:58.488174   24117 logs.go:123] Gathering logs for kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] ...
	I1213 19:04:58.488206   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:58.527511   24117 logs.go:123] Gathering logs for kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] ...
	I1213 19:04:58.527540   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:58.581254   24117 logs.go:123] Gathering logs for container status ...
	I1213 19:04:58.581290   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:05:01.123166   24117 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 19:05:01.126725   24117 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 19:05:01.127678   24117 api_server.go:141] control plane version: v1.31.2
	I1213 19:05:01.127700   24117 api_server.go:131] duration metric: took 3.422401118s to wait for apiserver health ...
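	[editor's note] The healthz probe logged above can be reproduced from the host. A sketch, assuming the cluster keeps the default system:public-info-viewer RBAC binding (which exposes /healthz to unauthenticated clients) and using -k to skip verification of the cluster's self-signed certificate:
	curl -k https://192.168.49.2:8443/healthz    # expected output: ok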
	I1213 19:05:01.127708   24117 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:05:01.127727   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:05:01.127777   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:05:01.161081   24117 cri.go:89] found id: "768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:05:01.161100   24117 cri.go:89] found id: ""
	I1213 19:05:01.161107   24117 logs.go:282] 1 containers: [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323]
	I1213 19:05:01.161146   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.164604   24117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:05:01.164676   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:05:01.198691   24117 cri.go:89] found id: "2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:05:01.198715   24117 cri.go:89] found id: ""
	I1213 19:05:01.198722   24117 logs.go:282] 1 containers: [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06]
	I1213 19:05:01.198764   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.201972   24117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:05:01.202041   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:05:01.236153   24117 cri.go:89] found id: "c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:05:01.236175   24117 cri.go:89] found id: ""
	I1213 19:05:01.236183   24117 logs.go:282] 1 containers: [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c]
	I1213 19:05:01.236237   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.240173   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:05:01.240246   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:05:01.273916   24117 cri.go:89] found id: "b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:05:01.273939   24117 cri.go:89] found id: ""
	I1213 19:05:01.273946   24117 logs.go:282] 1 containers: [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d]
	I1213 19:05:01.274001   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.277359   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:05:01.277416   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:05:01.309576   24117 cri.go:89] found id: "9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:05:01.309602   24117 cri.go:89] found id: ""
	I1213 19:05:01.309610   24117 logs.go:282] 1 containers: [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65]
	I1213 19:05:01.309652   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.312874   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:05:01.312938   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:05:01.345780   24117 cri.go:89] found id: "96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:05:01.345798   24117 cri.go:89] found id: ""
	I1213 19:05:01.345806   24117 logs.go:282] 1 containers: [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986]
	I1213 19:05:01.345845   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.349017   24117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:05:01.349089   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:05:01.384522   24117 cri.go:89] found id: "d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:05:01.384543   24117 cri.go:89] found id: ""
	I1213 19:05:01.384551   24117 logs.go:282] 1 containers: [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5]
	I1213 19:05:01.384591   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.387790   24117 logs.go:123] Gathering logs for kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] ...
	I1213 19:05:01.387816   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:05:01.433166   24117 logs.go:123] Gathering logs for etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] ...
	I1213 19:05:01.433196   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:05:01.477176   24117 logs.go:123] Gathering logs for kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] ...
	I1213 19:05:01.477208   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:05:01.515750   24117 logs.go:123] Gathering logs for kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] ...
	I1213 19:05:01.515780   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:05:01.547760   24117 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:05:01.547785   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:05:01.623891   24117 logs.go:123] Gathering logs for container status ...
	I1213 19:05:01.623930   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:05:01.664416   24117 logs.go:123] Gathering logs for dmesg ...
	I1213 19:05:01.664455   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:05:01.677012   24117 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:05:01.677041   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:05:01.773315   24117 logs.go:123] Gathering logs for kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] ...
	I1213 19:05:01.773347   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:05:01.829244   24117 logs.go:123] Gathering logs for kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] ...
	I1213 19:05:01.829284   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:05:01.863117   24117 logs.go:123] Gathering logs for kubelet ...
	I1213 19:05:01.863153   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:05:01.946639   24117 logs.go:123] Gathering logs for coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] ...
	I1213 19:05:01.946676   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:05:04.510780   24117 system_pods.go:59] 19 kube-system pods found
	I1213 19:05:04.510828   24117 system_pods.go:61] "amd-gpu-device-plugin-bl7z9" [53b1759f-8dcc-4454-ba3e-6feaf74540e7] Running
	I1213 19:05:04.510835   24117 system_pods.go:61] "coredns-7c65d6cfc9-vdvvc" [e7ae489a-7c45-40fb-8676-05e0be28bead] Running
	I1213 19:05:04.510839   24117 system_pods.go:61] "csi-hostpath-attacher-0" [68f49318-ecc3-4639-960c-0e788a457273] Running
	I1213 19:05:04.510850   24117 system_pods.go:61] "csi-hostpath-resizer-0" [356b4293-7940-44f3-ac81-f9413d5cbf9b] Running
	I1213 19:05:04.510854   24117 system_pods.go:61] "csi-hostpathplugin-97tn6" [eea99428-236d-4e3e-bf78-139bc53a1565] Running
	I1213 19:05:04.510857   24117 system_pods.go:61] "etcd-addons-237678" [5a4f15e1-e00d-47a0-b1dd-b0905caf5d03] Running
	I1213 19:05:04.510861   24117 system_pods.go:61] "kindnet-f9dml" [74b975ef-1918-49e4-a81a-550827609fc1] Running
	I1213 19:05:04.510864   24117 system_pods.go:61] "kube-apiserver-addons-237678" [0ae41178-7528-4943-900c-27b5b826c8cd] Running
	I1213 19:05:04.510868   24117 system_pods.go:61] "kube-controller-manager-addons-237678" [77273d82-9ac6-463f-8899-6f7c685eea58] Running
	I1213 19:05:04.510871   24117 system_pods.go:61] "kube-ingress-dns-minikube" [e759fa09-c5fa-4e06-8839-edc1e904b62e] Running
	I1213 19:05:04.510874   24117 system_pods.go:61] "kube-proxy-8xhqt" [55f3abc6-9664-46cf-9750-c30ed47c57f0] Running
	I1213 19:05:04.510877   24117 system_pods.go:61] "kube-scheduler-addons-237678" [5711179f-7df5-4e84-9b46-fad638dea898] Running
	I1213 19:05:04.510880   24117 system_pods.go:61] "metrics-server-84c5f94fbc-p2h9p" [d3e6cf22-81c6-4dd9-8a14-2e6cb15543f0] Running
	I1213 19:05:04.510885   24117 system_pods.go:61] "nvidia-device-plugin-daemonset-5ppp7" [c9d2d640-a841-4988-aaab-2a74cbfe5596] Running
	I1213 19:05:04.510888   24117 system_pods.go:61] "registry-5cc95cd69-sgzjd" [dc9a854b-15a2-47cc-b4c8-0f7c608e5335] Running
	I1213 19:05:04.510891   24117 system_pods.go:61] "registry-proxy-nnht8" [c1db19b5-cb0e-4cec-b6fb-69ed544cf362] Running
	I1213 19:05:04.510895   24117 system_pods.go:61] "snapshot-controller-56fcc65765-c4x78" [b09a009d-8270-47b0-92a1-1a15522bed87] Running
	I1213 19:05:04.510899   24117 system_pods.go:61] "snapshot-controller-56fcc65765-f2dhs" [88f04c09-91f5-447a-8cd2-08494d44cdb7] Running
	I1213 19:05:04.510905   24117 system_pods.go:61] "storage-provisioner" [1721d202-3c96-45c0-a0bb-8a5664f3274b] Running
	I1213 19:05:04.510910   24117 system_pods.go:74] duration metric: took 3.383196961s to wait for pod list to return data ...
	I1213 19:05:04.510919   24117 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:05:04.513317   24117 default_sa.go:45] found service account: "default"
	I1213 19:05:04.513339   24117 default_sa.go:55] duration metric: took 2.414259ms for default service account to be created ...
	I1213 19:05:04.513346   24117 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:05:04.521678   24117 system_pods.go:86] 19 kube-system pods found
	I1213 19:05:04.521707   24117 system_pods.go:89] "amd-gpu-device-plugin-bl7z9" [53b1759f-8dcc-4454-ba3e-6feaf74540e7] Running
	I1213 19:05:04.521714   24117 system_pods.go:89] "coredns-7c65d6cfc9-vdvvc" [e7ae489a-7c45-40fb-8676-05e0be28bead] Running
	I1213 19:05:04.521718   24117 system_pods.go:89] "csi-hostpath-attacher-0" [68f49318-ecc3-4639-960c-0e788a457273] Running
	I1213 19:05:04.521721   24117 system_pods.go:89] "csi-hostpath-resizer-0" [356b4293-7940-44f3-ac81-f9413d5cbf9b] Running
	I1213 19:05:04.521725   24117 system_pods.go:89] "csi-hostpathplugin-97tn6" [eea99428-236d-4e3e-bf78-139bc53a1565] Running
	I1213 19:05:04.521729   24117 system_pods.go:89] "etcd-addons-237678" [5a4f15e1-e00d-47a0-b1dd-b0905caf5d03] Running
	I1213 19:05:04.521733   24117 system_pods.go:89] "kindnet-f9dml" [74b975ef-1918-49e4-a81a-550827609fc1] Running
	I1213 19:05:04.521737   24117 system_pods.go:89] "kube-apiserver-addons-237678" [0ae41178-7528-4943-900c-27b5b826c8cd] Running
	I1213 19:05:04.521741   24117 system_pods.go:89] "kube-controller-manager-addons-237678" [77273d82-9ac6-463f-8899-6f7c685eea58] Running
	I1213 19:05:04.521745   24117 system_pods.go:89] "kube-ingress-dns-minikube" [e759fa09-c5fa-4e06-8839-edc1e904b62e] Running
	I1213 19:05:04.521749   24117 system_pods.go:89] "kube-proxy-8xhqt" [55f3abc6-9664-46cf-9750-c30ed47c57f0] Running
	I1213 19:05:04.521754   24117 system_pods.go:89] "kube-scheduler-addons-237678" [5711179f-7df5-4e84-9b46-fad638dea898] Running
	I1213 19:05:04.521758   24117 system_pods.go:89] "metrics-server-84c5f94fbc-p2h9p" [d3e6cf22-81c6-4dd9-8a14-2e6cb15543f0] Running
	I1213 19:05:04.521764   24117 system_pods.go:89] "nvidia-device-plugin-daemonset-5ppp7" [c9d2d640-a841-4988-aaab-2a74cbfe5596] Running
	I1213 19:05:04.521771   24117 system_pods.go:89] "registry-5cc95cd69-sgzjd" [dc9a854b-15a2-47cc-b4c8-0f7c608e5335] Running
	I1213 19:05:04.521774   24117 system_pods.go:89] "registry-proxy-nnht8" [c1db19b5-cb0e-4cec-b6fb-69ed544cf362] Running
	I1213 19:05:04.521781   24117 system_pods.go:89] "snapshot-controller-56fcc65765-c4x78" [b09a009d-8270-47b0-92a1-1a15522bed87] Running
	I1213 19:05:04.521784   24117 system_pods.go:89] "snapshot-controller-56fcc65765-f2dhs" [88f04c09-91f5-447a-8cd2-08494d44cdb7] Running
	I1213 19:05:04.521787   24117 system_pods.go:89] "storage-provisioner" [1721d202-3c96-45c0-a0bb-8a5664f3274b] Running
	I1213 19:05:04.521794   24117 system_pods.go:126] duration metric: took 8.442049ms to wait for k8s-apps to be running ...
	I1213 19:05:04.521803   24117 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:05:04.521847   24117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:05:04.533231   24117 system_svc.go:56] duration metric: took 11.418309ms WaitForService to wait for kubelet
	I1213 19:05:04.533263   24117 kubeadm.go:582] duration metric: took 2m9.907572714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:05:04.533282   24117 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:05:04.536474   24117 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 19:05:04.536508   24117 node_conditions.go:123] node cpu capacity is 8
	I1213 19:05:04.536522   24117 node_conditions.go:105] duration metric: took 3.235126ms to run NodePressure ...
	I1213 19:05:04.536537   24117 start.go:241] waiting for startup goroutines ...
	I1213 19:05:04.536547   24117 start.go:246] waiting for cluster config update ...
	I1213 19:05:04.536573   24117 start.go:255] writing updated cluster config ...
	I1213 19:05:04.536900   24117 ssh_runner.go:195] Run: rm -f paused
	I1213 19:05:04.585451   24117 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 19:05:04.588070   24117 out.go:177] * Done! kubectl is now configured to use "addons-237678" cluster and "default" namespace by default
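	[editor's note] After the "Done!" message, the configured context and the "minor skew: 1" note two lines above can be verified by hand; a sketch, assuming kubectl picked up the kubeconfig minikube just wrote:
	kubectl config current-context    # expected: addons-237678
	kubectl version                   # client 1.32.0 vs server 1.31.2, the skew noted above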
	
	
	==> CRI-O <==
	Dec 13 19:06:49 addons-237678 crio[1041]: time="2024-12-13 19:06:49.325776705Z" level=info msg="Removed pod sandbox: a776d12530a6e60714bb87f4bc8cbc3582e1e4fc853270d187459c3901f0e711" id=113c58dc-96ae-4e6d-97ba-2904a6fc1329 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:06:53 addons-237678 crio[1041]: time="2024-12-13 19:06:53.830835884Z" level=info msg="Stopping container: a0fc3854e20f9d4f60e5226800b127ef6991fa1929d848fd6be94f157ebc521a (timeout: 30s)" id=748b39f6-e82c-46b7-a577-494dc611bad6 name=/runtime.v1.RuntimeService/StopContainer
	Dec 13 19:06:53 addons-237678 conmon[4379]: conmon a0fc3854e20f9d4f60e5 <ninfo>: container 4391 exited with status 2
	Dec 13 19:06:53 addons-237678 crio[1041]: time="2024-12-13 19:06:53.964009605Z" level=info msg="Stopped container a0fc3854e20f9d4f60e5226800b127ef6991fa1929d848fd6be94f157ebc521a: default/cloud-spanner-emulator-dc5db94f4-8x4zb/cloud-spanner-emulator" id=748b39f6-e82c-46b7-a577-494dc611bad6 name=/runtime.v1.RuntimeService/StopContainer
	Dec 13 19:06:53 addons-237678 crio[1041]: time="2024-12-13 19:06:53.964556951Z" level=info msg="Stopping pod sandbox: 79bb56ade19af2ad425fadb36e38ec2343239e5a705382daa3db7c505dc5c384" id=10638f79-ac5e-4d0c-9682-896bed788602 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:06:53 addons-237678 crio[1041]: time="2024-12-13 19:06:53.964759740Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-dc5db94f4-8x4zb Namespace:default ID:79bb56ade19af2ad425fadb36e38ec2343239e5a705382daa3db7c505dc5c384 UID:a9365383-5691-4862-ac2c-cd2396490229 NetNS:/var/run/netns/17060021-11ce-4530-b96c-c2646348c645 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 13 19:06:53 addons-237678 crio[1041]: time="2024-12-13 19:06:53.964883749Z" level=info msg="Deleting pod default_cloud-spanner-emulator-dc5db94f4-8x4zb from CNI network \"kindnet\" (type=ptp)"
	Dec 13 19:06:54 addons-237678 crio[1041]: time="2024-12-13 19:06:54.004813136Z" level=info msg="Stopped pod sandbox: 79bb56ade19af2ad425fadb36e38ec2343239e5a705382daa3db7c505dc5c384" id=10638f79-ac5e-4d0c-9682-896bed788602 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:06:54 addons-237678 crio[1041]: time="2024-12-13 19:06:54.093899611Z" level=info msg="Removing container: a0fc3854e20f9d4f60e5226800b127ef6991fa1929d848fd6be94f157ebc521a" id=b110f351-4baa-4de8-b6ea-82d526491ab8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:06:54 addons-237678 crio[1041]: time="2024-12-13 19:06:54.107164850Z" level=info msg="Removed container a0fc3854e20f9d4f60e5226800b127ef6991fa1929d848fd6be94f157ebc521a: default/cloud-spanner-emulator-dc5db94f4-8x4zb/cloud-spanner-emulator" id=b110f351-4baa-4de8-b6ea-82d526491ab8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:07:49 addons-237678 crio[1041]: time="2024-12-13 19:07:49.328972560Z" level=info msg="Stopping pod sandbox: 79bb56ade19af2ad425fadb36e38ec2343239e5a705382daa3db7c505dc5c384" id=9a1436c1-3fcf-47b2-a6fa-9148e8cf8613 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:07:49 addons-237678 crio[1041]: time="2024-12-13 19:07:49.329026405Z" level=info msg="Stopped pod sandbox (already stopped): 79bb56ade19af2ad425fadb36e38ec2343239e5a705382daa3db7c505dc5c384" id=9a1436c1-3fcf-47b2-a6fa-9148e8cf8613 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:07:49 addons-237678 crio[1041]: time="2024-12-13 19:07:49.329308613Z" level=info msg="Removing pod sandbox: 79bb56ade19af2ad425fadb36e38ec2343239e5a705382daa3db7c505dc5c384" id=2e16b391-64ed-48f9-9e99-4d7a8929a59a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:07:49 addons-237678 crio[1041]: time="2024-12-13 19:07:49.337716653Z" level=info msg="Removed pod sandbox: 79bb56ade19af2ad425fadb36e38ec2343239e5a705382daa3db7c505dc5c384" id=2e16b391-64ed-48f9-9e99-4d7a8929a59a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.060075647Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-nw6xv/POD" id=22f7f1c8-6b9e-4466-8ae7-cadfcc13ea4a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.060147247Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.112411419Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-nw6xv Namespace:default ID:8f4a46f532ce14676a0f1329cb11d0e8a63d37e5b096c0bedd90774cc71c2395 UID:43a49a6a-8334-440e-81e7-cdde9e5928de NetNS:/var/run/netns/37afc8e8-25c0-4629-8eff-ff73ba9732cb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.112445789Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-nw6xv to CNI network \"kindnet\" (type=ptp)"
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.123676509Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-nw6xv Namespace:default ID:8f4a46f532ce14676a0f1329cb11d0e8a63d37e5b096c0bedd90774cc71c2395 UID:43a49a6a-8334-440e-81e7-cdde9e5928de NetNS:/var/run/netns/37afc8e8-25c0-4629-8eff-ff73ba9732cb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.123864903Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-nw6xv for CNI network kindnet (type=ptp)"
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.127384094Z" level=info msg="Ran pod sandbox 8f4a46f532ce14676a0f1329cb11d0e8a63d37e5b096c0bedd90774cc71c2395 with infra container: default/hello-world-app-55bf9c44b4-nw6xv/POD" id=22f7f1c8-6b9e-4466-8ae7-cadfcc13ea4a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.128417324Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=61c792c9-6bf4-4c43-8dec-a1a3f523c2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.128605889Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=61c792c9-6bf4-4c43-8dec-a1a3f523c2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.129589449Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b0eda006-c9f7-4e1d-9c79-3d2250564e37 name=/runtime.v1.ImageService/PullImage
	Dec 13 19:08:01 addons-237678 crio[1041]: time="2024-12-13 19:08:01.139424851Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	28b1021684d14       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   8be5ed4885fab       nginx
	043729a8f1a04       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   710d41dd224bd       busybox
	709300fe7b4a2       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   57f2066db9846       ingress-nginx-controller-5f85ff4588-vp9bh
	8381eaef690aa       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   f663d45601fcd       kube-ingress-dns-minikube
	fa05dedb223af       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   29b6c4fe6108c       metrics-server-84c5f94fbc-p2h9p
	8204d33f325d1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   ca2328f09e7d2       ingress-nginx-admission-patch-zpjz5
	8fda3ac0427bd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   4df4e8d8c5fb5       ingress-nginx-admission-create-xhsqd
	c480313fefdec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   bb9d16d7e5dff       coredns-7c65d6cfc9-vdvvc
	2534fa12b02a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   b6917787d0dcd       storage-provisioner
	d87bbe9c87d8d       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3                           4 minutes ago       Running             kindnet-cni               0                   603c780b2c72a       kindnet-f9dml
	9f5557cd0de04       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   275c47aca081f       kube-proxy-8xhqt
	b30b864697aec       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   c6c6b6d835e3e       kube-scheduler-addons-237678
	96317b3727960       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   1243d5f6d7c66       kube-controller-manager-addons-237678
	768b5c4c34a15       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   e7a1cca37bfd5       kube-apiserver-addons-237678
	2c5f3c09909f8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   e6956a31cb336       etcd-addons-237678
	
	
	==> coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] <==
	[INFO] 10.244.0.9:36127 - 52657 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009363s
	[INFO] 10.244.0.9:52071 - 16497 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004765621s
	[INFO] 10.244.0.9:52071 - 16862 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004990612s
	[INFO] 10.244.0.9:48586 - 8887 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003833018s
	[INFO] 10.244.0.9:48586 - 9162 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005910193s
	[INFO] 10.244.0.9:48806 - 36660 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005064688s
	[INFO] 10.244.0.9:48806 - 36325 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00512534s
	[INFO] 10.244.0.9:58757 - 39817 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057175s
	[INFO] 10.244.0.9:58757 - 39550 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008793s
	[INFO] 10.244.0.21:38837 - 57858 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227254s
	[INFO] 10.244.0.21:57349 - 44307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000310535s
	[INFO] 10.244.0.21:57041 - 45185 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013722s
	[INFO] 10.244.0.21:33508 - 21865 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000178275s
	[INFO] 10.244.0.21:36779 - 11237 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111412s
	[INFO] 10.244.0.21:45042 - 26122 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144218s
	[INFO] 10.244.0.21:57845 - 43458 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007908759s
	[INFO] 10.244.0.21:45028 - 30234 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008444358s
	[INFO] 10.244.0.21:45360 - 64104 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00533748s
	[INFO] 10.244.0.21:39922 - 21066 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006040585s
	[INFO] 10.244.0.21:42952 - 59798 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005412136s
	[INFO] 10.244.0.21:53208 - 36878 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005469023s
	[INFO] 10.244.0.21:48330 - 38451 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000765363s
	[INFO] 10.244.0.21:49610 - 119 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000903039s
	[INFO] 10.244.0.25:55432 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000185988s
	[INFO] 10.244.0.25:56180 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126288s
	
	
	==> describe nodes <==
	Name:               addons-237678
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-237678
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=addons-237678
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T19_02_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-237678
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:02:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-237678
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:07:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:06:23 +0000   Fri, 13 Dec 2024 19:02:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:06:23 +0000   Fri, 13 Dec 2024 19:02:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:06:23 +0000   Fri, 13 Dec 2024 19:02:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:06:23 +0000   Fri, 13 Dec 2024 19:03:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-237678
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 a39f8add46c4434a84f945353a7f0dd2
	  System UUID:                3db003e5-459d-48ce-93a9-cf79d8436984
	  Boot ID:                    c9637a07-3c27-4cb7-b1b1-da5edcdac29f
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     hello-world-app-55bf9c44b4-nw6xv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-vp9bh    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m2s
	  kube-system                 coredns-7c65d6cfc9-vdvvc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m8s
	  kube-system                 etcd-addons-237678                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m13s
	  kube-system                 kindnet-f9dml                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m8s
	  kube-system                 kube-apiserver-addons-237678                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-addons-237678        200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-8xhqt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-addons-237678                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 metrics-server-84c5f94fbc-p2h9p              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node addons-237678 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node addons-237678 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node addons-237678 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m13s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m13s                  kubelet          Node addons-237678 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m13s                  kubelet          Node addons-237678 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m13s                  kubelet          Node addons-237678 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m9s                   node-controller  Node addons-237678 event: Registered Node addons-237678 in Controller
	  Normal   NodeReady                4m49s                  kubelet          Node addons-237678 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000810] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000872] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000934] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000890] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.642001] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025181] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.037072] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.033073] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.269869] kauditd_printk_skb: 46 callbacks suppressed
	[Dec13 19:05] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[  +1.027832] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[  +2.015864] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[  +4.159712] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[Dec13 19:06] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[ +16.122837] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[ +33.533567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	
	
	==> etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] <==
	{"level":"warn","ts":"2024-12-13T19:02:57.412382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.79144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-8xhqt\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-12-13T19:02:57.424561Z","caller":"traceutil/trace.go:171","msg":"trace[518213457] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-8xhqt; range_end:; response_count:1; response_revision:412; }","duration":"194.963283ms","start":"2024-12-13T19:02:57.229581Z","end":"2024-12-13T19:02:57.424544Z","steps":["trace[518213457] 'agreement among raft nodes before linearized reading'  (duration: 182.76233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:02:57.529280Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.117592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-13T19:02:57.531513Z","caller":"traceutil/trace.go:171","msg":"trace[141944507] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:412; }","duration":"107.372517ms","start":"2024-12-13T19:02:57.424132Z","end":"2024-12-13T19:02:57.531504Z","steps":["trace[141944507] 'range keys from in-memory index tree'  (duration: 105.051819ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:57.529614Z","caller":"traceutil/trace.go:171","msg":"trace[715618318] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"105.452251ms","start":"2024-12-13T19:02:57.424146Z","end":"2024-12-13T19:02:57.529598Z","steps":["trace[715618318] 'process raft request'  (duration: 83.784366ms)","trace[715618318] 'compare'  (duration: 21.103338ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-13T19:02:57.529755Z","caller":"traceutil/trace.go:171","msg":"trace[1011888473] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"105.349915ms","start":"2024-12-13T19:02:57.424396Z","end":"2024-12-13T19:02:57.529746Z","steps":["trace[1011888473] 'process raft request'  (duration: 105.049786ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:57.530150Z","caller":"traceutil/trace.go:171","msg":"trace[1053731736] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"105.6439ms","start":"2024-12-13T19:02:57.424495Z","end":"2024-12-13T19:02:57.530139Z","steps":["trace[1053731736] 'process raft request'  (duration: 105.007817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:02:58.508033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.048898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-12-13T19:02:58.508215Z","caller":"traceutil/trace.go:171","msg":"trace[2101835278] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:475; }","duration":"189.236433ms","start":"2024-12-13T19:02:58.318964Z","end":"2024-12-13T19:02:58.508201Z","steps":["trace[2101835278] 'agreement among raft nodes before linearized reading'  (duration: 189.018458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:02:58.508268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.623514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2024-12-13T19:02:58.508419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.298571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:02:58.508765Z","caller":"traceutil/trace.go:171","msg":"trace[285053497] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:475; }","duration":"189.639321ms","start":"2024-12-13T19:02:58.319112Z","end":"2024-12-13T19:02:58.508751Z","steps":["trace[285053497] 'agreement among raft nodes before linearized reading'  (duration: 189.284697ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:58.508104Z","caller":"traceutil/trace.go:171","msg":"trace[514971690] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"179.889947ms","start":"2024-12-13T19:02:58.328199Z","end":"2024-12-13T19:02:58.508089Z","steps":["trace[514971690] 'process raft request'  (duration: 95.211676ms)","trace[514971690] 'compare'  (duration: 84.133929ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:02:58.508059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.633376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:02:58.509063Z","caller":"traceutil/trace.go:171","msg":"trace[1132366134] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io; range_end:; response_count:0; response_revision:475; }","duration":"189.639304ms","start":"2024-12-13T19:02:58.319412Z","end":"2024-12-13T19:02:58.509051Z","steps":["trace[1132366134] 'agreement among raft nodes before linearized reading'  (duration: 188.613809ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:58.508657Z","caller":"traceutil/trace.go:171","msg":"trace[1651366501] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:475; }","duration":"190.009596ms","start":"2024-12-13T19:02:58.318636Z","end":"2024-12-13T19:02:58.508646Z","steps":["trace[1651366501] 'agreement among raft nodes before linearized reading'  (duration: 189.607172ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:08.691693Z","caller":"traceutil/trace.go:171","msg":"trace[444463932] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"104.310468ms","start":"2024-12-13T19:04:08.587356Z","end":"2024-12-13T19:04:08.691666Z","steps":["trace[444463932] 'process raft request'  (duration: 104.195414ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:08.897222Z","caller":"traceutil/trace.go:171","msg":"trace[453905227] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"114.705815ms","start":"2024-12-13T19:04:08.782495Z","end":"2024-12-13T19:04:08.897201Z","steps":["trace[453905227] 'process raft request'  (duration: 35.816821ms)","trace[453905227] 'compare'  (duration: 78.780002ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:04:09.150471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.202861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-12-13T19:04:09.150511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.696987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/amd-gpu-device-plugin-bl7z9\" ","response":"range_response_count:1 size:4338"}
	{"level":"info","ts":"2024-12-13T19:04:09.150545Z","caller":"traceutil/trace.go:171","msg":"trace[2083324233] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1136; }","duration":"139.298512ms","start":"2024-12-13T19:04:09.011232Z","end":"2024-12-13T19:04:09.150531Z","steps":["trace[2083324233] 'range keys from in-memory index tree'  (duration: 139.089038ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:09.150560Z","caller":"traceutil/trace.go:171","msg":"trace[738570761] range","detail":"{range_begin:/registry/pods/kube-system/amd-gpu-device-plugin-bl7z9; range_end:; response_count:1; response_revision:1136; }","duration":"133.751762ms","start":"2024-12-13T19:04:09.016797Z","end":"2024-12-13T19:04:09.150548Z","steps":["trace[738570761] 'range keys from in-memory index tree'  (duration: 133.592583ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:46.892913Z","caller":"traceutil/trace.go:171","msg":"trace[604936375] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"115.826269ms","start":"2024-12-13T19:04:46.777060Z","end":"2024-12-13T19:04:46.892887Z","steps":["trace[604936375] 'process raft request'  (duration: 115.561063ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:47.002866Z","caller":"traceutil/trace.go:171","msg":"trace[1006121228] transaction","detail":"{read_only:false; response_revision:1271; number_of_response:1; }","duration":"106.981938ms","start":"2024-12-13T19:04:46.895869Z","end":"2024-12-13T19:04:47.002851Z","steps":["trace[1006121228] 'process raft request'  (duration: 67.969982ms)","trace[1006121228] 'compare'  (duration: 38.924556ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-13T19:06:00.058196Z","caller":"traceutil/trace.go:171","msg":"trace[1753636216] transaction","detail":"{read_only:false; response_revision:1664; number_of_response:1; }","duration":"116.964381ms","start":"2024-12-13T19:05:59.941214Z","end":"2024-12-13T19:06:00.058179Z","steps":["trace[1753636216] 'process raft request'  (duration: 116.758092ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:08:02 up 50 min,  0 users,  load average: 0.23, 0.48, 0.27
	Linux addons-237678 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] <==
	I1213 19:05:53.028474       1 main.go:301] handling current node
	I1213 19:06:03.028562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:06:03.028594       1 main.go:301] handling current node
	I1213 19:06:13.035352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:06:13.035391       1 main.go:301] handling current node
	I1213 19:06:23.031379       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:06:23.031423       1 main.go:301] handling current node
	I1213 19:06:33.028070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:06:33.028113       1 main.go:301] handling current node
	I1213 19:06:43.028746       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:06:43.028786       1 main.go:301] handling current node
	I1213 19:06:53.030090       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:06:53.030123       1 main.go:301] handling current node
	I1213 19:07:03.028544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:07:03.028589       1 main.go:301] handling current node
	I1213 19:07:13.035529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:07:13.035562       1 main.go:301] handling current node
	I1213 19:07:23.035943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:07:23.035978       1 main.go:301] handling current node
	I1213 19:07:33.036116       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:07:33.036157       1 main.go:301] handling current node
	I1213 19:07:43.035717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:07:43.035754       1 main.go:301] handling current node
	I1213 19:07:53.037849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:07:53.037884       1 main.go:301] handling current node
	
	
	==> kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 19:04:53.953308       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.182.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.182.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.182.114:443: connect: connection refused" logger="UnhandledError"
	I1213 19:04:53.985224       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 19:05:15.282984       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43348: use of closed network connection
	E1213 19:05:15.441662       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43374: use of closed network connection
	I1213 19:05:24.411840       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.226.224"}
	I1213 19:05:30.166719       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1213 19:05:31.281752       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1213 19:05:35.602575       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 19:05:35.777233       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.134.99"}
	I1213 19:06:00.136341       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1213 19:06:14.882102       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1213 19:06:26.921832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.921884       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:26.939583       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.939755       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:26.951069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.951126       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:26.963086       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.963122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 19:06:27.940603       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 19:06:28.007543       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1213 19:06:28.014242       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1213 19:08:01.015249       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.253.170"}
	
	
	==> kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] <==
	E1213 19:06:49.223059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1213 19:06:53.789908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="12.9µs"
	I1213 19:06:53.945148       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1213 19:06:53.945179       1 shared_informer.go:320] Caches are synced for resource quota
	I1213 19:06:54.338741       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1213 19:06:54.338778       1 shared_informer.go:320] Caches are synced for garbage collector
	I1213 19:06:55.715194       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W1213 19:06:59.440303       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:06:59.440342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:04.790952       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:04.790995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:09.571216       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:09.571268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:35.989002       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:35.989045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:37.351458       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:37.351494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:41.485952       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:41.485995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:43.334528       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:43.334567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1213 19:08:00.759521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.962803ms"
	I1213 19:08:00.764914       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.267598ms"
	I1213 19:08:00.765078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="52.116µs"
	I1213 19:08:00.765553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="80.053µs"
	
	
	==> kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] <==
	I1213 19:02:56.911144       1 server_linux.go:66] "Using iptables proxy"
	I1213 19:02:58.111944       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1213 19:02:58.112087       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:02:58.531727       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:02:58.531788       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:02:58.610889       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:02:58.611812       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:02:58.612206       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:02:58.614953       1 config.go:199] "Starting service config controller"
	I1213 19:02:58.614972       1 config.go:328] "Starting node config controller"
	I1213 19:02:58.614974       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:02:58.614986       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:02:58.614995       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:02:58.614984       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:02:58.715834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:02:58.715850       1 shared_informer.go:320] Caches are synced for node config
	I1213 19:02:58.715877       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] <==
	W1213 19:02:46.829758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:02:46.829838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 19:02:46.829885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 19:02:46.829978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:46.829995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1213 19:02:46.830000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:46.830031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:02:46.830064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 19:02:46.830097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.830230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:46.830254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:47.634169       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 19:02:47.634209       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1213 19:02:47.737007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:02:47.737043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:47.867352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:02:47.867388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:47.919781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:47.919825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1213 19:02:50.228342       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758487    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="hostpath"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758496    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc229adf-4b7f-4aa3-bac3-c252ef190de4" containerName="task-pv-container"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758504    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9d2d640-a841-4988-aaab-2a74cbfe5596" containerName="nvidia-device-plugin-ctr"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758513    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a15c399-4206-4fcb-b491-db76261d0f58" containerName="local-path-provisioner"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758528    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88f04c09-91f5-447a-8cd2-08494d44cdb7" containerName="volume-snapshot-controller"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758537    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="csi-external-health-monitor-controller"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758547    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="356b4293-7940-44f3-ac81-f9413d5cbf9b" containerName="csi-resizer"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: E1213 19:08:00.758556    1640 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="853482a3-7bc4-42eb-a36b-bac7dd740c94" containerName="yakd"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758612    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="hostpath"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758623    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f49318-ecc3-4639-960c-0e788a457273" containerName="csi-attacher"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758632    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="853482a3-7bc4-42eb-a36b-bac7dd740c94" containerName="yakd"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758640    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="node-driver-registrar"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758647    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="csi-snapshotter"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758656    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a15c399-4206-4fcb-b491-db76261d0f58" containerName="local-path-provisioner"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758663    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d2d640-a841-4988-aaab-2a74cbfe5596" containerName="nvidia-device-plugin-ctr"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758672    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9365383-5691-4862-ac2c-cd2396490229" containerName="cloud-spanner-emulator"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758679    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="liveness-probe"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758686    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="csi-provisioner"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758693    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09a009d-8270-47b0-92a1-1a15522bed87" containerName="volume-snapshot-controller"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758701    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f04c09-91f5-447a-8cd2-08494d44cdb7" containerName="volume-snapshot-controller"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758708    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="356b4293-7940-44f3-ac81-f9413d5cbf9b" containerName="csi-resizer"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758716    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b1759f-8dcc-4454-ba3e-6feaf74540e7" containerName="amd-gpu-device-plugin"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758724    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc229adf-4b7f-4aa3-bac3-c252ef190de4" containerName="task-pv-container"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.758730    1640 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea99428-236d-4e3e-bf78-139bc53a1565" containerName="csi-external-health-monitor-controller"
	Dec 13 19:08:00 addons-237678 kubelet[1640]: I1213 19:08:00.845955    1640 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhgq\" (UniqueName: \"kubernetes.io/projected/43a49a6a-8334-440e-81e7-cdde9e5928de-kube-api-access-qmhgq\") pod \"hello-world-app-55bf9c44b4-nw6xv\" (UID: \"43a49a6a-8334-440e-81e7-cdde9e5928de\") " pod="default/hello-world-app-55bf9c44b4-nw6xv"
	
	
	==> storage-provisioner [2534fa12b02a5babd54edc685b232b2e6932f85b1d900f193792502cb9b3863d] <==
	I1213 19:03:14.317726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 19:03:14.326350       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 19:03:14.326404       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 19:03:14.332512       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 19:03:14.332673       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-237678_4c270fd9-af4a-43bd-b164-6ce955f2bfb9!
	I1213 19:03:14.333652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad7c13ff-a318-449b-9520-fc6d6f2d250a", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-237678_4c270fd9-af4a-43bd-b164-6ce955f2bfb9 became leader
	I1213 19:03:14.432853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-237678_4c270fd9-af4a-43bd-b164-6ce955f2bfb9!
	

-- /stdout --
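
The storage-provisioner log above shows an ordinary client-go leader election over the kube-system/k8s.io-minikube-hostpath Endpoints lock. For a hands-on check of that lock, a minimal sketch (assuming kubectl access to this context; the annotation key is the standard client-go resourcelock name):

	# Print the current leader-election record stored on the Endpoints object.
	kubectl --context addons-237678 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
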
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-237678 -n addons-237678
helpers_test.go:261: (dbg) Run:  kubectl --context addons-237678 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-nw6xv ingress-nginx-admission-create-xhsqd ingress-nginx-admission-patch-zpjz5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-237678 describe pod hello-world-app-55bf9c44b4-nw6xv ingress-nginx-admission-create-xhsqd ingress-nginx-admission-patch-zpjz5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-237678 describe pod hello-world-app-55bf9c44b4-nw6xv ingress-nginx-admission-create-xhsqd ingress-nginx-admission-patch-zpjz5: exit status 1 (62.628656ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-nw6xv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-237678/192.168.49.2
	Start Time:       Fri, 13 Dec 2024 19:08:00 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qmhgq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qmhgq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-nw6xv to addons-237678
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xhsqd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zpjz5" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-237678 describe pod hello-world-app-55bf9c44b4-nw6xv ingress-nginx-admission-create-xhsqd ingress-nginx-admission-patch-zpjz5: exit status 1
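
The exit status 1 above is expected once the ingress admission Job pods have been garbage-collected: kubectl describe fails hard on names it cannot find. A tolerant variant, sketched with the pod names from this run, is kubectl get with --ignore-not-found:

	# Missing pods are skipped silently instead of forcing a non-zero exit.
	kubectl --context addons-237678 get pod hello-world-app-55bf9c44b4-nw6xv \
	  ingress-nginx-admission-create-xhsqd ingress-nginx-admission-patch-zpjz5 \
	  --ignore-not-found -o wide
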
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 addons disable ingress-dns --alsologtostderr -v=1: (1.089987748s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 addons disable ingress --alsologtostderr -v=1: (7.711323232s)
--- FAIL: TestAddons/parallel/Ingress (156.70s)
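
The step that actually failed is the in-VM curl at addons_test.go:262: ssh reported the command's exit status 28, which is curl's operation-timed-out code, so the ingress controller never answered on 127.0.0.1:80 within the deadline. A manual reproduction of that probe, as a sketch (profile name and Host header taken from this run; the explicit --max-time is an added assumption to keep the check bounded):

	# Expect the nginx welcome page when the ingress path is healthy;
	# exit code 28 again means the request timed out inside the node.
	out/minikube-linux-amd64 -p addons-237678 ssh -- \
	  curl -sS --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/
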

TestAddons/parallel/MetricsServer (358.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.36162ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-p2h9p" [d3e6cf22-81c6-4dd9-8a14-2e6cb15543f0] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002639006s
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (66.238336ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bl7z9, age: 2m16.785591309s

** /stderr **
I1213 19:05:29.787853   22695 retry.go:31] will retry after 3.220496796s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (57.803022ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bl7z9, age: 2m20.064622031s

** /stderr **
I1213 19:05:33.066880   22695 retry.go:31] will retry after 2.718809961s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (61.073781ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bl7z9, age: 2m22.845338281s

** /stderr **
I1213 19:05:35.847326   22695 retry.go:31] will retry after 4.883955346s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (84.609448ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bl7z9, age: 2m27.815219931s

** /stderr **
I1213 19:05:40.817016   22695 retry.go:31] will retry after 6.75235467s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (61.775779ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bl7z9, age: 2m34.629874076s

** /stderr **
I1213 19:05:47.631779   22695 retry.go:31] will retry after 9.651766026s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (57.776552ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bl7z9, age: 2m44.340339044s

** /stderr **
I1213 19:05:57.342244   22695 retry.go:31] will retry after 33.899907329s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (57.815446ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bl7z9, age: 3m18.299100678s

** /stderr **
I1213 19:06:31.301295   22695 retry.go:31] will retry after 24.524150156s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (57.397837ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-vdvvc, age: 4m1.882091873s

** /stderr **
I1213 19:06:55.883902   22695 retry.go:31] will retry after 53.037609372s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (58.870676ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-vdvvc, age: 4m54.981977604s

** /stderr **
I1213 19:07:48.983904   22695 retry.go:31] will retry after 51.778840755s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (56.833711ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-vdvvc, age: 5m46.823009936s

** /stderr **
I1213 19:08:40.825426   22695 retry.go:31] will retry after 1m12.252429721s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (56.392522ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-vdvvc, age: 6m59.140439217s

** /stderr **
I1213 19:09:53.142415   22695 retry.go:31] will retry after 1m26.993321722s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-237678 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-237678 top pods -n kube-system: exit status 1 (57.319277ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-vdvvc, age: 8m26.194887931s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
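
kubectl top pods only succeeds once the metrics.k8s.io aggregated API is Available and metrics-server has completed a scrape cycle; here the metrics-server pod was Running the whole time, yet kubectl top kept failing for over eight minutes. Two stock kubectl checks that separate "API not registered" from "no samples yet", sketched against this context:

	# Is the aggregated API registered and marked Available?
	kubectl --context addons-237678 get apiservices v1beta1.metrics.k8s.io
	# Does the API return any PodMetrics at all? An empty items list means
	# metrics-server is reachable but has not produced usable samples yet.
	kubectl --context addons-237678 get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 300
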
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-237678
helpers_test.go:235: (dbg) docker inspect addons-237678:

-- stdout --
	[
	    {
	        "Id": "aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03",
	        "Created": "2024-12-13T19:02:37.256027583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 24880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-13T19:02:37.391578376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d489d36b1c808fdb46955d21247b1ea12cf0c774bbaa5d6d4f9ce6979fd65009",
	        "ResolvConfPath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/hostname",
	        "HostsPath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/hosts",
	        "LogPath": "/var/lib/docker/containers/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03/aecc65461015a4bfe50052a4966ccaf7f88de0d25a6fe3fd4f4d3fbfa3731c03-json.log",
	        "Name": "/addons-237678",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-237678:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-237678",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b-init/diff:/var/lib/docker/overlay2/f762192c552406e923de3fcb2db2756770325685c188638c13eb19bc257f7ea1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c95ca03e345dbfed2a087b9539313895733f40e5feaa702260d1f7acbd639d7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-237678",
	                "Source": "/var/lib/docker/volumes/addons-237678/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-237678",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-237678",
	                "name.minikube.sigs.k8s.io": "addons-237678",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9787fc3be071ca8f943d62019dfabb149f8a0d20a3c8529f454e950668f8d26c",
	            "SandboxKey": "/var/run/docker/netns/9787fc3be071",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-237678": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5eb4d2d8f8dd4d490eb0db6ef731064c7679e08089bdcf32fc89ea4ea2086677",
	                    "EndpointID": "0d2f33bbcdfe3f215b860828d7058288c48a373d7c50cff3e3c5c7c4a8e5ba90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-237678",
	                        "aecc65461015"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
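
Most of the inspect dump above is boilerplate; the fields that matter for reaching the node are the published host ports and the container IP. A compact filter over the same output, as a sketch (assumes jq is installed on the host):

	# Reduce docker inspect to the port bindings and the cluster network IP.
	docker inspect addons-237678 \
	  | jq '.[0] | {ports: .NetworkSettings.Ports, ip: .NetworkSettings.Networks["addons-237678"].IPAddress}'
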
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-237678 -n addons-237678
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 logs -n 25: (1.088714863s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-509470                                                                   | download-docker-509470 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-428326   | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | binary-mirror-428326                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41935                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-428326                                                                     | binary-mirror-428326   | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-237678                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-237678                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-237678 --wait=true                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | -p addons-237678                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-237678 ip                                                                            | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-237678 ssh curl -s                                                                   | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-237678 ssh cat                                                                       | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | /opt/local-path-provisioner/pvc-44be87ee-926f-4202-9a14-cc59be04dc06_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-237678 addons                                                                        | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-237678 ip                                                                            | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-237678 addons disable                                                                | addons-237678          | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:02:14
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:02:14.869381   24117 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:02:14.869508   24117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:14.869518   24117 out.go:358] Setting ErrFile to fd 2...
	I1213 19:02:14.869524   24117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:14.869704   24117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:02:14.870288   24117 out.go:352] Setting JSON to false
	I1213 19:02:14.871089   24117 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2679,"bootTime":1734113856,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:02:14.871187   24117 start.go:139] virtualization: kvm guest
	I1213 19:02:14.873343   24117 out.go:177] * [addons-237678] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:02:14.874776   24117 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:02:14.874771   24117 notify.go:220] Checking for updates...
	I1213 19:02:14.876218   24117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:02:14.877612   24117 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:02:14.878886   24117 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	I1213 19:02:14.880245   24117 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:02:14.881525   24117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:02:14.882948   24117 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:02:14.903737   24117 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:02:14.903863   24117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:02:14.949872   24117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-13 19:02:14.941037928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:02:14.949972   24117 docker.go:318] overlay module found
	I1213 19:02:14.952952   24117 out.go:177] * Using the docker driver based on user configuration
	I1213 19:02:14.954455   24117 start.go:297] selected driver: docker
	I1213 19:02:14.954468   24117 start.go:901] validating driver "docker" against <nil>
	I1213 19:02:14.954479   24117 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:02:14.955217   24117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:02:15.001239   24117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-13 19:02:14.991614924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:02:15.001475   24117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:02:15.001716   24117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:02:15.003482   24117 out.go:177] * Using Docker driver with root privileges
	I1213 19:02:15.004742   24117 cni.go:84] Creating CNI manager for ""
	I1213 19:02:15.004811   24117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:02:15.004826   24117 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:02:15.004896   24117 start.go:340] cluster config:
	{Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:02:15.006206   24117 out.go:177] * Starting "addons-237678" primary control-plane node in "addons-237678" cluster
	I1213 19:02:15.007232   24117 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:02:15.008465   24117 out.go:177] * Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:02:15.009629   24117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:15.009655   24117 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:02:15.009674   24117 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:15.009686   24117 cache.go:56] Caching tarball of preloaded images
	I1213 19:02:15.009776   24117 preload.go:172] Found /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 19:02:15.009791   24117 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 19:02:15.010129   24117 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/config.json ...
	I1213 19:02:15.010155   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/config.json: {Name:mk08cc8c3b1749a2d5b51432634b107fe06d2d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:15.025309   24117 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:02:15.025419   24117 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:02:15.025434   24117 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory, skipping pull
	I1213 19:02:15.025438   24117 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in cache, skipping pull
	I1213 19:02:15.025447   24117 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	I1213 19:02:15.025455   24117 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from local cache
	I1213 19:02:27.446756   24117 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from cached tarball
	I1213 19:02:27.446791   24117 cache.go:194] Successfully downloaded all kic artifacts
	I1213 19:02:27.446820   24117 start.go:360] acquireMachinesLock for addons-237678: {Name:mk9d17c191be779336b39fc07058cf7c6bc54007 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:27.446913   24117 start.go:364] duration metric: took 75.192µs to acquireMachinesLock for "addons-237678"
	I1213 19:02:27.446954   24117 start.go:93] Provisioning new machine with config: &{Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:02:27.447043   24117 start.go:125] createHost starting for "" (driver="docker")
	I1213 19:02:27.448954   24117 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1213 19:02:27.449165   24117 start.go:159] libmachine.API.Create for "addons-237678" (driver="docker")
	I1213 19:02:27.449192   24117 client.go:168] LocalClient.Create starting
	I1213 19:02:27.449279   24117 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem
	I1213 19:02:27.608485   24117 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem
	I1213 19:02:27.762981   24117 cli_runner.go:164] Run: docker network inspect addons-237678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 19:02:27.779101   24117 cli_runner.go:211] docker network inspect addons-237678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 19:02:27.779175   24117 network_create.go:284] running [docker network inspect addons-237678] to gather additional debugging logs...
	I1213 19:02:27.779204   24117 cli_runner.go:164] Run: docker network inspect addons-237678
	W1213 19:02:27.794856   24117 cli_runner.go:211] docker network inspect addons-237678 returned with exit code 1
	I1213 19:02:27.794895   24117 network_create.go:287] error running [docker network inspect addons-237678]: docker network inspect addons-237678: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-237678 not found
	I1213 19:02:27.794912   24117 network_create.go:289] output of [docker network inspect addons-237678]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-237678 not found
	
	** /stderr **
	I1213 19:02:27.795000   24117 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:02:27.810774   24117 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a8e350}
	I1213 19:02:27.810816   24117 network_create.go:124] attempt to create docker network addons-237678 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 19:02:27.810853   24117 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-237678 addons-237678
	I1213 19:02:28.157571   24117 network_create.go:108] docker network addons-237678 192.168.49.0/24 created
	I1213 19:02:28.157596   24117 kic.go:121] calculated static IP "192.168.49.2" for the "addons-237678" container
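
For reference, the bridge network created here can be spot-checked with a plain docker call; this is a sketch, with the network name and the expected subnet/gateway taken from the log above:

	docker network inspect addons-237678 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# per the log, prints: 192.168.49.0/24 192.168.49.1
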
	I1213 19:02:28.157661   24117 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 19:02:28.172992   24117 cli_runner.go:164] Run: docker volume create addons-237678 --label name.minikube.sigs.k8s.io=addons-237678 --label created_by.minikube.sigs.k8s.io=true
	I1213 19:02:28.211186   24117 oci.go:103] Successfully created a docker volume addons-237678
	I1213 19:02:28.211293   24117 cli_runner.go:164] Run: docker run --rm --name addons-237678-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-237678 --entrypoint /usr/bin/test -v addons-237678:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib
	I1213 19:02:32.655361   24117 cli_runner.go:217] Completed: docker run --rm --name addons-237678-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-237678 --entrypoint /usr/bin/test -v addons-237678:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib: (4.444030941s)
	I1213 19:02:32.655389   24117 oci.go:107] Successfully prepared a docker volume addons-237678
	I1213 19:02:32.655405   24117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:32.655422   24117 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 19:02:32.655467   24117 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-237678:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 19:02:37.199678   24117 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-237678:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.544174785s)
	I1213 19:02:37.199708   24117 kic.go:203] duration metric: took 4.544281704s to extract preloaded images to volume ...
	W1213 19:02:37.199838   24117 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 19:02:37.199960   24117 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 19:02:37.241444   24117 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-237678 --name addons-237678 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-237678 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-237678 --network addons-237678 --ip 192.168.49.2 --volume addons-237678:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9
	I1213 19:02:37.571702   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Running}}
	I1213 19:02:37.589340   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:37.608011   24117 cli_runner.go:164] Run: docker exec addons-237678 stat /var/lib/dpkg/alternatives/iptables
	I1213 19:02:37.647850   24117 oci.go:144] the created container "addons-237678" has a running status.
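
The --publish=127.0.0.1::22 flag in the docker run above lets Docker pick an ephemeral host port for SSH; the port the SSH client dials later in this log (32768) can be recovered with docker port. A sketch, assuming the container name from the log:

	docker port addons-237678 22/tcp
	# e.g. 127.0.0.1:32768, the endpoint the SSH client below connects to
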
	I1213 19:02:37.647886   24117 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa...
	I1213 19:02:37.875540   24117 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 19:02:37.897987   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:37.919387   24117 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 19:02:37.919413   24117 kic_runner.go:114] Args: [docker exec --privileged addons-237678 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 19:02:38.017417   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:38.038563   24117 machine.go:93] provisionDockerMachine start ...
	I1213 19:02:38.038656   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.057310   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.057504   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.057517   24117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 19:02:38.198760   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-237678
	
	I1213 19:02:38.198800   24117 ubuntu.go:169] provisioning hostname "addons-237678"
	I1213 19:02:38.198866   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.217589   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.217770   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.217784   24117 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-237678 && echo "addons-237678" | sudo tee /etc/hostname
	I1213 19:02:38.369038   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-237678
	
	I1213 19:02:38.369100   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.386556   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.386757   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.386781   24117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-237678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-237678/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-237678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:02:38.519356   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:02:38.519382   24117 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20090-15903/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-15903/.minikube}
	I1213 19:02:38.519421   24117 ubuntu.go:177] setting up certificates
	I1213 19:02:38.519433   24117 provision.go:84] configureAuth start
	I1213 19:02:38.519483   24117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-237678
	I1213 19:02:38.536086   24117 provision.go:143] copyHostCerts
	I1213 19:02:38.536151   24117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-15903/.minikube/ca.pem (1078 bytes)
	I1213 19:02:38.536252   24117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-15903/.minikube/cert.pem (1123 bytes)
	I1213 19:02:38.536317   24117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-15903/.minikube/key.pem (1675 bytes)
	I1213 19:02:38.536371   24117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-15903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca-key.pem org=jenkins.addons-237678 san=[127.0.0.1 192.168.49.2 addons-237678 localhost minikube]
	I1213 19:02:38.629249   24117 provision.go:177] copyRemoteCerts
	I1213 19:02:38.629309   24117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:02:38.629342   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.646327   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:38.743189   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 19:02:38.764090   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:02:38.784954   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:02:38.806073   24117 provision.go:87] duration metric: took 286.618153ms to configureAuth
	I1213 19:02:38.806103   24117 ubuntu.go:193] setting minikube options for container-runtime
	I1213 19:02:38.806267   24117 config.go:182] Loaded profile config "addons-237678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:38.806357   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:38.822926   24117 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:38.823106   24117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 19:02:38.823125   24117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:02:39.041385   24117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:02:39.041405   24117 machine.go:96] duration metric: took 1.002820912s to provisionDockerMachine
	I1213 19:02:39.041415   24117 client.go:171] duration metric: took 11.592217765s to LocalClient.Create
	I1213 19:02:39.041426   24117 start.go:167] duration metric: took 11.592262718s to libmachine.API.Create "addons-237678"
	I1213 19:02:39.041432   24117 start.go:293] postStartSetup for "addons-237678" (driver="docker")
	I1213 19:02:39.041441   24117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:02:39.041484   24117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:02:39.041518   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.058582   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.155480   24117 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:02:39.158701   24117 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:02:39.158732   24117 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1213 19:02:39.158740   24117 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1213 19:02:39.158749   24117 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1213 19:02:39.158764   24117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-15903/.minikube/addons for local assets ...
	I1213 19:02:39.158819   24117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-15903/.minikube/files for local assets ...
	I1213 19:02:39.158846   24117 start.go:296] duration metric: took 117.408146ms for postStartSetup
	I1213 19:02:39.159139   24117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-237678
	I1213 19:02:39.176017   24117 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/config.json ...
	I1213 19:02:39.176258   24117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:02:39.176302   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.193919   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.287765   24117 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:02:39.291735   24117 start.go:128] duration metric: took 11.844676452s to createHost
	I1213 19:02:39.291762   24117 start.go:83] releasing machines lock for "addons-237678", held for 11.844837676s
	I1213 19:02:39.291828   24117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-237678
	I1213 19:02:39.308742   24117 ssh_runner.go:195] Run: cat /version.json
	I1213 19:02:39.308794   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.308823   24117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:02:39.308892   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:39.327200   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.327775   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:39.418859   24117 ssh_runner.go:195] Run: systemctl --version
	I1213 19:02:39.422729   24117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:02:39.559873   24117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 19:02:39.563931   24117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:02:39.580847   24117 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1213 19:02:39.580935   24117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:02:39.606376   24117 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1213 19:02:39.606398   24117 start.go:495] detecting cgroup driver to use...
	I1213 19:02:39.606425   24117 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:02:39.606461   24117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:02:39.619458   24117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:02:39.629006   24117 docker.go:217] disabling cri-docker service (if available) ...
	I1213 19:02:39.629051   24117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:02:39.640415   24117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:02:39.652480   24117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:02:39.724111   24117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:02:39.800484   24117 docker.go:233] disabling docker service ...
	I1213 19:02:39.800548   24117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:02:39.817683   24117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:02:39.827869   24117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:02:39.900341   24117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:02:39.980660   24117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:02:39.990327   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:02:40.005441   24117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 19:02:40.005503   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.013979   24117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:02:40.014039   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.022604   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.031296   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.039734   24117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:02:40.047895   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.056094   24117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:40.069660   24117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
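
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. Assuming they all applied cleanly, the touched keys can be spot-checked from the host (a sketch):

	docker exec addons-237678 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
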
	I1213 19:02:40.077947   24117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:02:40.085058   24117 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 19:02:40.085115   24117 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 19:02:40.097525   24117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
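
Both kernel prerequisites handled above (the br_netfilter module and IPv4 forwarding) are checkable in a single sysctl call; a sketch using the container name from the log:

	docker exec addons-237678 sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
	# both should report 1 once the modprobe and the echo above have run
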
	I1213 19:02:40.105057   24117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:40.174877   24117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:02:40.275535   24117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:02:40.275605   24117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:02:40.278616   24117 start.go:563] Will wait 60s for crictl version
	I1213 19:02:40.278661   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:02:40.281538   24117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 19:02:40.312723   24117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1213 19:02:40.312812   24117 ssh_runner.go:195] Run: crio --version
	I1213 19:02:40.346272   24117 ssh_runner.go:195] Run: crio --version
	I1213 19:02:40.379851   24117 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1213 19:02:40.381328   24117 cli_runner.go:164] Run: docker network inspect addons-237678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:02:40.397635   24117 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:02:40.400996   24117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:02:40.410607   24117 kubeadm.go:883] updating cluster {Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:02:40.410720   24117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:40.410772   24117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:02:40.472930   24117 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:02:40.472956   24117 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:02:40.473044   24117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:02:40.506189   24117 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:02:40.506210   24117 cache_images.go:84] Images are preloaded, skipping loading
	I1213 19:02:40.506217   24117 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1213 19:02:40.506292   24117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-237678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
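
The kubelet unit drop-in rendered above is written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the effective unit can then be inspected with systemd itself. A sketch:

	docker exec addons-237678 systemctl cat kubelet
	# shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in above
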
	I1213 19:02:40.506358   24117 ssh_runner.go:195] Run: crio config
	I1213 19:02:40.545000   24117 cni.go:84] Creating CNI manager for ""
	I1213 19:02:40.545020   24117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:02:40.545036   24117 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 19:02:40.545058   24117 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-237678 NodeName:addons-237678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:02:40.545173   24117 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-237678"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
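
Recent kubeadm releases can sanity-check a config like the one above before init; a sketch that assumes the file has already landed at /var/tmp/minikube/kubeadm.yaml (the cp happens further below) and uses the kubeadm binary path from this log:

	docker exec addons-237678 sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# exits 0 if all three documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration) validate
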
	
	I1213 19:02:40.545236   24117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 19:02:40.552922   24117 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 19:02:40.552985   24117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:02:40.560532   24117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:02:40.576013   24117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:02:40.591443   24117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1213 19:02:40.606683   24117 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:02:40.609692   24117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:02:40.618870   24117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:40.695564   24117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:02:40.707068   24117 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678 for IP: 192.168.49.2
	I1213 19:02:40.707092   24117 certs.go:194] generating shared ca certs ...
	I1213 19:02:40.707113   24117 certs.go:226] acquiring lock for ca certs: {Name:mk2fbaac84ab0753d470e1940d79f7bab81bd059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:40.707258   24117 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key
	I1213 19:02:40.943570   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt ...
	I1213 19:02:40.943602   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt: {Name:mkdb34501d4529e4f582fc9651a84aaa3424c28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:40.943769   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key ...
	I1213 19:02:40.943779   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key: {Name:mk2e973a83de73ccad632e5b26aff21214d2bdc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:40.943850   24117 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key
	I1213 19:02:41.084013   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.crt ...
	I1213 19:02:41.084040   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.crt: {Name:mk3c2611246939751ed236e914b6e8b65b3fc451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.084205   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key ...
	I1213 19:02:41.084216   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key: {Name:mk7464522b7ff8a643d52f3c19186a8d46486aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.084284   24117 certs.go:256] generating profile certs ...
	I1213 19:02:41.084336   24117 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.key
	I1213 19:02:41.084350   24117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt with IP's: []
	I1213 19:02:41.214303   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt ...
	I1213 19:02:41.214331   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: {Name:mk785f1592568ee3f28a7bac32c45dd7c605fa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.214475   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.key ...
	I1213 19:02:41.214484   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.key: {Name:mk96a6dfd7700d17587300963698b5d2cfb8a38d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.214550   24117 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70
	I1213 19:02:41.214569   24117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 19:02:41.336159   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70 ...
	I1213 19:02:41.336189   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70: {Name:mkb3a9df19cbc8acf913abf9a3a879b3ccb711bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.336346   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70 ...
	I1213 19:02:41.336360   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70: {Name:mkb33c54c0d6c298791786897d053bb1ca298d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.336429   24117 certs.go:381] copying /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt.52e9ce70 -> /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt
	I1213 19:02:41.336498   24117 certs.go:385] copying /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key.52e9ce70 -> /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key
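
The SANs baked into the generated apiserver certificate (the four IPs listed above) can be read back with openssl; a sketch using the profile path from the log, assuming an OpenSSL recent enough to support -ext:

	openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt
	# expected to include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2 among the SANs
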
	I1213 19:02:41.336548   24117 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key
	I1213 19:02:41.336565   24117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt with IP's: []
	I1213 19:02:41.400015   24117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt ...
	I1213 19:02:41.400044   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt: {Name:mk1d7f6e55002a189386cb19a8bb439c3435565c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.400196   24117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key ...
	I1213 19:02:41.400206   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key: {Name:mk040a144459cc8a1de1c98c510410be1ef4314a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:41.400367   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 19:02:41.400403   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/ca.pem (1078 bytes)
	I1213 19:02:41.400426   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:02:41.400453   24117 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-15903/.minikube/certs/key.pem (1675 bytes)
	I1213 19:02:41.401069   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:02:41.422176   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 19:02:41.442572   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:02:41.463146   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 19:02:41.483523   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:02:41.504555   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:02:41.525683   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:02:41.549135   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 19:02:41.570083   24117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:02:41.591494   24117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:02:41.606545   24117 ssh_runner.go:195] Run: openssl version
	I1213 19:02:41.611344   24117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 19:02:41.619519   24117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:02:41.622595   24117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:02 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:02:41.622638   24117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:02:41.628818   24117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
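
The b5213941.0 link name created here is the OpenSSL subject hash of the CA certificate, which is exactly what the x509 -hash call above computes; rerunning it shows the mapping:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink
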
	I1213 19:02:41.637212   24117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:02:41.640237   24117 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 19:02:41.640284   24117 kubeadm.go:392] StartCluster: {Name:addons-237678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-237678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:02:41.640376   24117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:02:41.640414   24117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:02:41.672019   24117 cri.go:89] found id: ""
	I1213 19:02:41.672086   24117 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:02:41.679799   24117 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:02:41.687191   24117 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1213 19:02:41.687232   24117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:02:41.694554   24117 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:02:41.694570   24117 kubeadm.go:157] found existing configuration files:
	
	I1213 19:02:41.694605   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:02:41.702006   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:02:41.702058   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:02:41.709216   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:02:41.716648   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:02:41.716706   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:02:41.724068   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:02:41.732969   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:02:41.733021   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:02:41.740280   24117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:02:41.747725   24117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:02:41.747780   24117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:02:41.755007   24117 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 19:02:41.804655   24117 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1213 19:02:41.851914   24117 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:02:49.774295   24117 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 19:02:49.774365   24117 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 19:02:49.774463   24117 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1213 19:02:49.774523   24117 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1213 19:02:49.774558   24117 kubeadm.go:310] OS: Linux
	I1213 19:02:49.774599   24117 kubeadm.go:310] CGROUPS_CPU: enabled
	I1213 19:02:49.774651   24117 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1213 19:02:49.774691   24117 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1213 19:02:49.774738   24117 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1213 19:02:49.774779   24117 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1213 19:02:49.774823   24117 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1213 19:02:49.774888   24117 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1213 19:02:49.774976   24117 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1213 19:02:49.775046   24117 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1213 19:02:49.775155   24117 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:02:49.775269   24117 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:02:49.775377   24117 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:02:49.775437   24117 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:02:49.777389   24117 out.go:235]   - Generating certificates and keys ...
	I1213 19:02:49.777475   24117 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 19:02:49.777552   24117 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 19:02:49.777614   24117 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 19:02:49.777669   24117 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 19:02:49.777724   24117 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 19:02:49.777771   24117 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 19:02:49.777831   24117 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 19:02:49.777937   24117 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-237678 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:02:49.778030   24117 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 19:02:49.778247   24117 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-237678 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:02:49.778318   24117 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 19:02:49.778376   24117 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 19:02:49.778416   24117 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 19:02:49.778466   24117 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:02:49.778510   24117 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:02:49.778557   24117 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:02:49.778608   24117 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:02:49.778664   24117 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:02:49.778713   24117 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:02:49.778783   24117 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:02:49.778843   24117 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:02:49.780499   24117 out.go:235]   - Booting up control plane ...
	I1213 19:02:49.780604   24117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:02:49.780681   24117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:02:49.780744   24117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:02:49.780846   24117 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:02:49.780986   24117 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:02:49.781059   24117 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 19:02:49.781222   24117 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:02:49.781333   24117 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:02:49.781425   24117 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.621764ms
	I1213 19:02:49.781534   24117 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 19:02:49.781624   24117 kubeadm.go:310] [api-check] The API server is healthy after 4.001359014s
	I1213 19:02:49.781739   24117 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 19:02:49.781874   24117 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 19:02:49.781965   24117 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 19:02:49.782154   24117 kubeadm.go:310] [mark-control-plane] Marking the node addons-237678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 19:02:49.782245   24117 kubeadm.go:310] [bootstrap-token] Using token: ufky5y.p8vtytenxjrrx9g5
	I1213 19:02:49.784910   24117 out.go:235]   - Configuring RBAC rules ...
	I1213 19:02:49.785025   24117 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 19:02:49.785143   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 19:02:49.785322   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 19:02:49.785487   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 19:02:49.785621   24117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 19:02:49.785730   24117 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 19:02:49.785866   24117 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 19:02:49.785928   24117 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 19:02:49.785996   24117 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 19:02:49.786006   24117 kubeadm.go:310] 
	I1213 19:02:49.786088   24117 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 19:02:49.786100   24117 kubeadm.go:310] 
	I1213 19:02:49.786163   24117 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 19:02:49.786169   24117 kubeadm.go:310] 
	I1213 19:02:49.786190   24117 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 19:02:49.786244   24117 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 19:02:49.786290   24117 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 19:02:49.786296   24117 kubeadm.go:310] 
	I1213 19:02:49.786340   24117 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 19:02:49.786346   24117 kubeadm.go:310] 
	I1213 19:02:49.786388   24117 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 19:02:49.786394   24117 kubeadm.go:310] 
	I1213 19:02:49.786441   24117 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 19:02:49.786512   24117 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 19:02:49.786599   24117 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 19:02:49.786609   24117 kubeadm.go:310] 
	I1213 19:02:49.786685   24117 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 19:02:49.786788   24117 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 19:02:49.786799   24117 kubeadm.go:310] 
	I1213 19:02:49.786866   24117 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ufky5y.p8vtytenxjrrx9g5 \
	I1213 19:02:49.786952   24117 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:638961caa3d3d382bee193acde3e67d6eb5a416d1c68186140e9cf3d3b49b876 \
	I1213 19:02:49.786972   24117 kubeadm.go:310] 	--control-plane 
	I1213 19:02:49.786978   24117 kubeadm.go:310] 
	I1213 19:02:49.787051   24117 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 19:02:49.787057   24117 kubeadm.go:310] 
	I1213 19:02:49.787123   24117 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ufky5y.p8vtytenxjrrx9g5 \
	I1213 19:02:49.787234   24117 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:638961caa3d3d382bee193acde3e67d6eb5a416d1c68186140e9cf3d3b49b876 
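The join commands above embed the bootstrap token and the SHA-256 hash of the cluster CA's public key. For reference, that hash can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation; a sketch, assuming the CA lives at /var/lib/minikube/certs/ca.crt as the [certs] step above indicates:

	# recompute the --discovery-token-ca-cert-hash value from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | sha256sum | cut -d' ' -f1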
	I1213 19:02:49.787245   24117 cni.go:84] Creating CNI manager for ""
	I1213 19:02:49.787252   24117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:02:49.788982   24117 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1213 19:02:49.790292   24117 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 19:02:49.793850   24117 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1213 19:02:49.793866   24117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 19:02:49.810277   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
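With the "docker" driver and "crio" runtime, minikube falls back to kindnet and applies the manifest it copied to /var/tmp/minikube/cni.yaml. A quick way to confirm the CNI came up, assuming the manifest names its DaemonSet kindnet and labels it app=kindnet as upstream kindnet does (both names are assumptions here):

	# check that the kindnet pods scheduled and the DaemonSet rolled out
	kubectl --context addons-237678 -n kube-system get pods -l app=kindnet
	kubectl --context addons-237678 -n kube-system rollout status daemonset/kindnet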
	I1213 19:02:49.998712   24117 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 19:02:49.998761   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:49.998830   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-237678 minikube.k8s.io/updated_at=2024_12_13T19_02_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=addons-237678 minikube.k8s.io/primary=true
	I1213 19:02:50.062630   24117 ops.go:34] apiserver oom_adj: -16
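The oom_adj line confirms the apiserver process runs at -16, making it a last-resort target for the kernel OOM killer. The label command just before it stamps the node with minikube.k8s.io/* metadata (version, commit, primary); to inspect the result, assuming the addons-237678 context is in your kubeconfig as the test's other kubectl calls suggest:

	# show the minikube.k8s.io/* labels applied to the node
	kubectl --context addons-237678 get node addons-237678 --show-labels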
	I1213 19:02:50.062777   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:50.563107   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:51.063793   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:51.563181   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:52.063061   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:52.562909   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:53.062964   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:53.563100   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:54.063613   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:54.563852   24117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:02:54.624663   24117 kubeadm.go:1113] duration metric: took 4.625944665s to wait for elevateKubeSystemPrivileges
	I1213 19:02:54.624714   24117 kubeadm.go:394] duration metric: took 12.984432698s to StartCluster
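The burst of `kubectl get sa default` runs at roughly 500ms intervals is the elevateKubeSystemPrivileges wait: the clusterrolebinding above cannot take effect until the default ServiceAccount exists. A shell sketch of what the logged retry loop amounts to:

	# poll until the default ServiceAccount is created, as the log's loop does
	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done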
	I1213 19:02:54.624738   24117 settings.go:142] acquiring lock: {Name:mk1d582ab037339c5185379bff3c01140f06f006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:54.624874   24117 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:02:54.625413   24117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-15903/kubeconfig: {Name:mka9db62e71382b1e468379ab2f4120f5c10e65e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:54.625628   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 19:02:54.625656   24117 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:02:54.625714   24117 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
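The toEnable map is the test's addon selection for this profile; the same toggles are exposed on the minikube CLI. For example, using the profile name from this run:

	# enable or disable individual addons on the addons-237678 profile
	minikube -p addons-237678 addons enable metrics-server
	minikube -p addons-237678 addons disable volcano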
	I1213 19:02:54.625836   24117 addons.go:69] Setting yakd=true in profile "addons-237678"
	I1213 19:02:54.625850   24117 addons.go:69] Setting cloud-spanner=true in profile "addons-237678"
	I1213 19:02:54.625869   24117 addons.go:234] Setting addon cloud-spanner=true in "addons-237678"
	I1213 19:02:54.625872   24117 addons.go:69] Setting metrics-server=true in profile "addons-237678"
	I1213 19:02:54.625877   24117 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-237678"
	I1213 19:02:54.625898   24117 addons.go:234] Setting addon metrics-server=true in "addons-237678"
	I1213 19:02:54.625901   24117 addons.go:69] Setting default-storageclass=true in profile "addons-237678"
	I1213 19:02:54.625905   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.625904   24117 config.go:182] Loaded profile config "addons-237678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:54.625916   24117 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-237678"
	I1213 19:02:54.625921   24117 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-237678"
	I1213 19:02:54.625930   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.625948   24117 addons.go:69] Setting ingress-dns=true in profile "addons-237678"
	I1213 19:02:54.625958   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.625964   24117 addons.go:234] Setting addon ingress-dns=true in "addons-237678"
	I1213 19:02:54.625995   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626041   24117 addons.go:69] Setting storage-provisioner=true in profile "addons-237678"
	I1213 19:02:54.626063   24117 addons.go:234] Setting addon storage-provisioner=true in "addons-237678"
	I1213 19:02:54.626091   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626272   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626436   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626451   24117 addons.go:69] Setting inspektor-gadget=true in profile "addons-237678"
	I1213 19:02:54.626455   24117 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-237678"
	I1213 19:02:54.626462   24117 addons.go:69] Setting volcano=true in profile "addons-237678"
	I1213 19:02:54.626465   24117 addons.go:234] Setting addon inspektor-gadget=true in "addons-237678"
	I1213 19:02:54.626451   24117 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-237678"
	I1213 19:02:54.626479   24117 addons.go:234] Setting addon volcano=true in "addons-237678"
	I1213 19:02:54.626492   24117 addons.go:69] Setting registry=true in profile "addons-237678"
	I1213 19:02:54.626496   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626503   24117 addons.go:234] Setting addon registry=true in "addons-237678"
	I1213 19:02:54.626515   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626522   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626528   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626546   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626439   24117 addons.go:234] Setting addon yakd=true in "addons-237678"
	I1213 19:02:54.626953   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626958   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.627126   24117 addons.go:69] Setting gcp-auth=true in profile "addons-237678"
	I1213 19:02:54.627129   24117 addons.go:69] Setting volumesnapshots=true in profile "addons-237678"
	I1213 19:02:54.627143   24117 addons.go:234] Setting addon volumesnapshots=true in "addons-237678"
	I1213 19:02:54.627146   24117 mustload.go:65] Loading cluster: addons-237678
	I1213 19:02:54.627169   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.627327   24117 config.go:182] Loaded profile config "addons-237678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:54.627450   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.627633   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.627706   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626467   24117 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-237678"
	I1213 19:02:54.627926   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.628457   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.625838   24117 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-237678"
	I1213 19:02:54.628791   24117 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-237678"
	I1213 19:02:54.628825   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.626441   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.631630   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.632350   24117 out.go:177] * Verifying Kubernetes components...
	I1213 19:02:54.625881   24117 addons.go:69] Setting ingress=true in profile "addons-237678"
	I1213 19:02:54.632765   24117 addons.go:234] Setting addon ingress=true in "addons-237678"
	I1213 19:02:54.626440   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626924   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.626481   24117 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-237678"
	I1213 19:02:54.632832   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.634332   24117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:54.655702   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.655979   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.656911   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.662674   24117 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 19:02:54.662741   24117 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1213 19:02:54.662865   24117 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1213 19:02:54.664384   24117 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:02:54.664405   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 19:02:54.664453   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.665058   24117 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:02:54.665102   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1213 19:02:54.665156   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.665702   24117 out.go:177]   - Using image docker.io/registry:2.8.3
	I1213 19:02:54.667186   24117 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 19:02:54.667341   24117 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 19:02:54.667357   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 19:02:54.667404   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.668402   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 19:02:54.668418   24117 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 19:02:54.668461   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.672445   24117 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 19:02:54.676045   24117 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:02:54.676077   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 19:02:54.676136   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.689294   24117 addons.go:234] Setting addon default-storageclass=true in "addons-237678"
	I1213 19:02:54.689349   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.689847   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
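The repeated `docker container inspect -f` template extracts the host port mapped to the guest's SSH port 22, which each new ssh client below then dials on 127.0.0.1. `docker port` is the shorter equivalent:

	# print the host address:port mapped to container port 22
	docker port addons-237678 22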
	W1213 19:02:54.723560   24117 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
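The volcano warning does not abort the run: the addon reports that it does not support crio and the remaining addons continue installing. To see which addons ended up enabled on the profile afterwards:

	# list addon status for this profile
	minikube -p addons-237678 addons list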
	I1213 19:02:54.726534   24117 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1213 19:02:54.726747   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:02:54.726794   24117 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1213 19:02:54.728115   24117 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1213 19:02:54.728276   24117 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1213 19:02:54.728394   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.729426   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 19:02:54.729444   24117 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 19:02:54.729495   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.730505   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1213 19:02:54.731965   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:02:54.734263   24117 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:02:54.734283   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 19:02:54.734403   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.734594   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 19:02:54.735958   24117 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1213 19:02:54.737492   24117 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1213 19:02:54.737512   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 19:02:54.737566   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.737704   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 19:02:54.738379   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.740034   24117 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1213 19:02:54.740096   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 19:02:54.741732   24117 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:02:54.741745   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 19:02:54.741798   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.743477   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 19:02:54.744737   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 19:02:54.746011   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 19:02:54.747172   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 19:02:54.747228   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.748321   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 19:02:54.749561   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 19:02:54.749580   24117 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 19:02:54.749577   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.749641   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.749888   24117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 19:02:54.751372   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 19:02:54.751394   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 19:02:54.751459   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.761103   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.777231   24117 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 19:02:54.777262   24117 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 19:02:54.777320   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:02:54.784555   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.785112   24117 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-237678"
	I1213 19:02:54.785156   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.785360   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.785631   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:02:54.786636   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.791827   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.793202   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.799473   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:02:54.799743   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.803815   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.804301   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.804311   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.813079   24117 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 19:02:54.814387   24117 out.go:177]   - Using image docker.io/busybox:stable
	I1213 19:02:54.815849   24117 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:02:54.815870   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 19:02:54.815927   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	W1213 19:02:54.827607   24117 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 19:02:54.827641   24117 retry.go:31] will retry after 304.050863ms: ssh: handshake failed: EOF
	I1213 19:02:54.828188   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 19:02:54.849242   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:02:54.911169   24117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:02:55.124459   24117 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 19:02:55.124489   24117 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 19:02:55.129036   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:02:55.220877   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:02:55.232565   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:02:55.313229   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:02:55.408044   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 19:02:55.412248   24117 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:02:55.412273   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 19:02:55.413983   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:02:55.416660   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:02:55.418208   24117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 19:02:55.418229   24117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 19:02:55.418441   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 19:02:55.418460   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 19:02:55.432076   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 19:02:55.432104   24117 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 19:02:55.508734   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 19:02:55.509865   24117 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:02:55.509887   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1213 19:02:55.620174   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 19:02:55.620281   24117 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 19:02:55.709414   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:02:55.710261   24117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 19:02:55.710322   24117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 19:02:55.714824   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 19:02:55.714906   24117 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 19:02:55.717957   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:02:55.730468   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 19:02:55.730561   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 19:02:55.933458   24117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:02:55.933487   24117 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 19:02:56.121572   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 19:02:56.121649   24117 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 19:02:56.208878   24117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 19:02:56.208975   24117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 19:02:56.313233   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 19:02:56.313310   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 19:02:56.420740   24117 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.592523299s)
	I1213 19:02:56.420862   24117 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
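The long sed pipeline that just completed rewrites the coredns ConfigMap so pods can resolve the host machine, and also inserts a log directive ahead of errors. The injected fragment, reconstructed from the sed expressions above with the surrounding stock directives elided:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf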
	I1213 19:02:56.420839   24117 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.509644119s)
	I1213 19:02:56.422243   24117 node_ready.go:35] waiting up to 6m0s for node "addons-237678" to be "Ready" ...
	I1213 19:02:56.423778   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:02:56.424211   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 19:02:56.424230   24117 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 19:02:56.431880   24117 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:02:56.431920   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 19:02:56.615396   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 19:02:56.615495   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 19:02:56.624224   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.495149859s)
	I1213 19:02:56.624356   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.403451703s)
	I1213 19:02:56.711425   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:02:56.719687   24117 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:02:56.719714   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 19:02:56.922938   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 19:02:56.922961   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 19:02:57.111034   24117 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-237678" context rescaled to 1 replicas
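kapi.go rescales coredns from the stock two replicas to one, which is enough for a single-node cluster. The equivalent kubectl call:

	# scale the coredns Deployment down to one replica
	kubectl --context addons-237678 -n kube-system scale deployment coredns --replicas=1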
	I1213 19:02:57.212048   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:02:57.229424   24117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 19:02:57.229514   24117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 19:02:57.620725   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 19:02:57.620760   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 19:02:57.826700   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 19:02:57.826781   24117 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 19:02:58.120264   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 19:02:58.120296   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 19:02:58.228486   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.995881659s)
	I1213 19:02:58.326059   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 19:02:58.326142   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 19:02:58.515683   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:02:58.529766   24117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:02:58.529848   24117 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 19:02:58.715432   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:02:58.814760   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.501445103s)
	I1213 19:02:58.814841   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.406676286s)
	I1213 19:03:00.617426   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.203408375s)
	I1213 19:03:00.617468   24117 addons.go:475] Verifying addon ingress=true in "addons-237678"
	I1213 19:03:00.617690   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.201003303s)
	I1213 19:03:00.617797   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.10898039s)
	I1213 19:03:00.617873   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.908372769s)
	I1213 19:03:00.617890   24117 addons.go:475] Verifying addon registry=true in "addons-237678"
	I1213 19:03:00.618066   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.900047215s)
	I1213 19:03:00.618173   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.194370756s)
	I1213 19:03:00.618188   24117 addons.go:475] Verifying addon metrics-server=true in "addons-237678"
	I1213 19:03:00.618233   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.906724049s)
	I1213 19:03:00.619624   24117 out.go:177] * Verifying ingress addon...
	I1213 19:03:00.619634   24117 out.go:177] * Verifying registry addon...
	I1213 19:03:00.622051   24117 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-237678 service yakd-dashboard -n yakd-dashboard
	
	I1213 19:03:00.623957   24117 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 19:03:00.624575   24117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 19:03:00.628536   24117 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:03:00.628584   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:00.628831   24117 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 19:03:00.628853   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:00.925968   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:01.131665   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:01.131959   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:01.236935   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.024770164s)
	W1213 19:03:01.236988   24117 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:03:01.237010   24117 retry.go:31] will retry after 231.591018ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:03:01.469055   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
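The failure above is the classic CRD create/use race: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same batch as the CRD that defines its kind, before the CRD is Established. minikube's answer is the timed retry with --force seen here; an alternative sketch is to gate on the CRD condition first:

	# wait for the CRD to be Established before applying VolumeSnapshotClass objects
	kubectl --context addons-237678 wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io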
	I1213 19:03:01.630252   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:01.630770   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:02.012738   24117 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 19:03:02.012812   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:03:02.033165   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:03:02.132656   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:02.133025   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:02.238169   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.522669523s)
	I1213 19:03:02.238261   24117 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-237678"
	I1213 19:03:02.240648   24117 out.go:177] * Verifying csi-hostpath-driver addon...
	I1213 19:03:02.242831   24117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 19:03:02.309660   24117 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:02.309689   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
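The kapi.go loops above re-poll pod state every second or so until the labelled pods go Ready. A kubectl-native equivalent for the csi-hostpath-driver selector shown here:

	# block until the csi-hostpath-driver pods report Ready (up to 5 minutes)
	kubectl --context addons-237678 -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=300s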
	I1213 19:03:02.325958   24117 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 19:03:02.343109   24117 addons.go:234] Setting addon gcp-auth=true in "addons-237678"
	I1213 19:03:02.343170   24117 host.go:66] Checking if "addons-237678" exists ...
	I1213 19:03:02.343565   24117 cli_runner.go:164] Run: docker container inspect addons-237678 --format={{.State.Status}}
	I1213 19:03:02.361491   24117 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 19:03:02.361549   24117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-237678
	I1213 19:03:02.382361   24117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa Username:docker}
	I1213 19:03:02.627438   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:02.628138   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:02.746109   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:03.127015   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:03.127524   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:03.245416   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:03.426094   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:03.627955   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:03.628368   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:03.746645   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:04.127761   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:04.128244   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:04.246313   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:04.539913   24117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.070789778s)
	I1213 19:03:04.539982   24117 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.17845697s)
	I1213 19:03:04.541859   24117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:04.543383   24117 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 19:03:04.544761   24117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 19:03:04.544775   24117 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 19:03:04.561871   24117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 19:03:04.561898   24117 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 19:03:04.577613   24117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:03:04.577635   24117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 19:03:04.593123   24117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
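
The "scp memory --> <path>" lines above describe writing an in-memory buffer straight to a file inside the node over the established SSH connection, after which a single kubectl call applies all three gcp-auth manifests. One way to express that transfer, assuming github.com/pkg/sftp layered on golang.org/x/crypto/ssh (minikube's real transfer code may differ); the endpoint and key path come from the sshutil line above:

package main

import (
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20090-15903/.minikube/machines/addons-237678/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	conn, err := ssh.Dial("tcp", "127.0.0.1:32768", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client, err := sftp.NewClient(conn)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	f, err := client.Create("/tmp/gcp-auth-ns.yaml") // illustrative target path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Write the in-memory manifest bytes; no temp file on the sending side.
	f.Write([]byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n"))
}
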
	I1213 19:03:04.627683   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:04.628091   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:04.748696   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:04.921725   24117 addons.go:475] Verifying addon gcp-auth=true in "addons-237678"
	I1213 19:03:04.923146   24117 out.go:177] * Verifying gcp-auth addon...
	I1213 19:03:04.925822   24117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 19:03:04.927872   24117 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 19:03:04.927896   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:05.127391   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:05.127731   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:05.245946   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:05.428411   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:05.627054   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:05.627536   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:05.745593   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:05.924842   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:05.928998   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:06.127582   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:06.128033   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:06.246115   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:06.428378   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:06.627006   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:06.627627   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:06.745717   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:06.928960   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:07.127683   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:07.128177   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:07.246190   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:07.429001   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:07.628199   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:07.628461   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:07.745730   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:07.925037   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:07.929411   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:08.127030   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:08.127704   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:08.246435   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:08.428483   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:08.626873   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:08.627356   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:08.746545   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:08.929147   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:09.127761   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:09.128461   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:09.246257   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:09.428302   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:09.627765   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:09.628099   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:09.746290   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:09.925792   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:09.928590   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:10.127147   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:10.127585   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:10.245731   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:10.427928   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:10.627587   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:10.627853   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:10.746120   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:10.928272   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:11.127133   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:11.127539   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:11.245768   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:11.429053   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:11.627537   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:11.627982   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:11.746409   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:11.925898   24117 node_ready.go:53] node "addons-237678" has status "Ready":"False"
	I1213 19:03:11.928000   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:12.127752   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:12.128065   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:12.246120   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:12.428616   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:12.627151   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:12.627706   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:12.745518   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:12.929105   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:13.127025   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:13.127204   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:13.245659   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:13.430413   24117 node_ready.go:49] node "addons-237678" has status "Ready":"True"
	I1213 19:03:13.430441   24117 node_ready.go:38] duration metric: took 17.008124974s for node "addons-237678" to be "Ready" ...
	I1213 19:03:13.430452   24117 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:03:13.431334   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:13.515259   24117 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace to be "Ready" ...
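
Both node_ready and pod_ready gate on status conditions rather than on phase alone: the node flips to Ready once its NodeReady condition is True, and amd-gpu-device-plugin-bl7z9 keeps reporting "Ready":"False" below until its PodReady condition turns True. A minimal sketch of both checks, reusing the names from this run and the same assumed client setup as the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Node readiness: the "Ready":"True" transition logged above.
	node, err := client.CoreV1().Nodes().Get(ctx, "addons-237678", metav1.GetOptions{})
	if err == nil {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("node Ready:", c.Status)
			}
		}
	}

	// Pod readiness: PodReady only turns True once readiness checks pass.
	pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "amd-gpu-device-plugin-bl7z9", metav1.GetOptions{})
	if err == nil {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Println("pod Ready:", c.Status)
			}
		}
	}
}
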
	I1213 19:03:13.711001   24117 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:03:13.711031   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:13.711600   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:13.748405   24117 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:13.748438   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:13.932613   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:14.131589   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:14.231597   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:14.331889   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:14.431526   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:14.627937   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:14.628234   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:14.747718   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:14.928607   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:15.128639   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:15.129392   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:15.247572   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:15.429467   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:15.521456   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:15.627199   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:15.627811   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:15.746824   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:15.929078   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:16.128027   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:16.128050   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:16.247835   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:16.429803   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:16.627504   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:16.627907   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:16.747561   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:16.929739   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:17.127355   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:17.127680   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:17.247644   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:17.429868   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:17.628097   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:17.628668   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:17.747865   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:17.929278   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:18.021302   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:18.127929   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:18.127938   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:18.248071   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:18.429081   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:18.628524   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:18.628904   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:18.746879   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:18.929562   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:19.127399   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:19.127717   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:19.246859   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:19.428746   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:19.627859   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:19.628331   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:19.747307   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:19.929888   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:20.129189   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:20.129590   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:20.246517   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:20.429012   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:20.520557   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:20.627561   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:20.627893   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:20.746703   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:20.928877   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:21.127651   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:21.127678   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:21.247492   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:21.430457   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:21.628505   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:21.628712   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:21.747022   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:21.929698   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:22.127995   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:22.128557   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:22.246696   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:22.429154   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:22.627558   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:22.628084   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:22.747121   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:22.929324   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:23.021489   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:23.127840   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:23.128048   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:23.247213   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:23.429286   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:23.628249   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:23.628390   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:23.747829   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:23.928384   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:24.127715   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:24.128075   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:24.247758   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:24.429773   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:24.627786   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:24.628161   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:24.746979   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:24.929818   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:25.127811   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:25.128158   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.247213   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:25.429471   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:25.520873   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:25.629560   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:25.629867   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.747183   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:25.929360   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:26.128043   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:26.128353   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:26.247711   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:26.429461   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:26.628064   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:26.628295   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:26.747086   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:26.945189   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:27.127574   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:27.127713   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:27.246403   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:27.429390   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:27.627551   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:27.627835   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:27.746938   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:27.928887   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:28.020230   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:28.128915   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:28.129355   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:28.247026   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:28.429414   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:28.627685   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:28.628162   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:28.746693   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:28.928642   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:29.127509   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:29.127832   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:29.246702   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:29.428564   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:29.628349   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:29.628576   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:29.747565   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:29.928764   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:30.021299   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:30.127963   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:30.128044   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.247641   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:30.430417   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:30.627651   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:30.628033   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.747556   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:30.929251   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:31.128515   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:31.129958   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:31.312180   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:31.429434   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:31.631201   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:31.632828   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:31.813385   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:31.930189   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:32.032540   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:32.137529   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.138676   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:32.311715   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:32.429274   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:32.629583   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:32.630280   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.748625   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:32.928904   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:33.128046   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:33.128460   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:33.247814   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:33.429161   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:33.628319   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:33.628504   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:33.747896   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:33.929417   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:34.127642   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:34.127941   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:34.246806   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:34.429029   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:34.521491   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:34.628116   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:34.628415   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:34.747859   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:34.929002   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:35.128024   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:35.128294   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:35.247791   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:35.429382   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:35.627823   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:35.628025   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:35.748944   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:35.928862   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:36.127708   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:36.128270   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:36.247495   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:36.428903   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:36.522368   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:36.627990   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:36.628089   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:36.747892   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:36.929566   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:37.128221   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:37.128823   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:37.246873   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:37.428672   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:37.627728   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:37.628001   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:37.746861   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:37.929112   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:38.128479   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:38.128911   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:38.247050   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:38.429151   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:38.627720   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:38.628056   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:38.746845   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:38.928878   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:39.020073   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:39.128009   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:39.128379   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:39.247664   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:39.429548   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:39.627731   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:39.627856   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:39.746980   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:39.929387   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:40.127830   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:40.128125   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.247145   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:40.429023   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:40.628263   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:40.628460   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.747311   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:40.929500   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:41.020925   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:41.128074   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.128342   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:41.247607   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:41.428804   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:41.628236   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.628614   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:41.747619   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:41.929092   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:42.127980   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:42.128130   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:42.247169   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:42.428982   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:42.630184   24117 kapi.go:107] duration metric: took 42.005607832s to wait for kubernetes.io/minikube-addons=registry ...
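
The "duration metric" lines are plain wall-clock deltas around a wait: the registry selector above took about 42s from the moment polling began. The bookkeeping amounts to no more than this (illustrative):

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	// ... poll the label selector until its pods are Running (as sketched earlier) ...
	time.Sleep(10 * time.Millisecond) // stand-in for the real wait
	fmt.Printf("duration metric: took %s to wait for %s ...\n",
		time.Since(start), "kubernetes.io/minikube-addons=registry")
}
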
	I1213 19:03:42.630265   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:42.747898   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:42.929700   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:43.127876   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:43.246698   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:43.428904   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:43.520351   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:43.628125   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:43.746988   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:43.929556   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:44.128088   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:44.246956   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:44.428749   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:44.628318   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:44.746530   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:44.929078   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:45.127840   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:45.246863   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:45.428948   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:45.521050   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:45.628716   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:45.747783   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:45.928977   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:46.127476   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.247732   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:46.429555   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:46.627815   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.747530   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:46.929069   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:47.128886   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:47.246479   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:47.430708   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:47.627668   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:47.746772   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:47.929434   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:48.021002   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:48.128235   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:48.247311   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:48.430572   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:48.628666   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:48.747742   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:48.928800   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:49.128174   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:49.246689   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:49.429833   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:49.628244   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:49.746827   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:49.929281   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:50.021143   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:50.128503   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:50.248343   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:50.428980   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:50.628323   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:50.748221   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:50.928944   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:51.128125   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:51.247085   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:51.428598   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:51.627680   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:51.746834   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:51.929632   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:52.127382   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:52.247224   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:52.429161   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:52.520997   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:52.628728   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:52.746648   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:52.928620   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:53.127630   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:53.246443   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:53.429524   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:53.628282   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:53.811612   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:53.928691   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:54.128269   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.247244   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:54.429541   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:54.521228   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:54.629055   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.747241   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:54.929917   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:55.127342   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:55.247916   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:55.430204   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:55.628160   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:55.747698   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:55.929939   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:56.127678   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.247063   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:56.428975   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:56.521415   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:56.628273   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.748052   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:56.929926   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:57.128873   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:57.248422   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:57.429328   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:57.628385   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:57.748787   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:57.928852   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:58.128116   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:58.246565   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:58.429768   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:58.628348   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:58.747373   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:58.930009   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:59.020693   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:59.128400   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:59.247361   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:59.429076   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:59.627469   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:59.747264   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:59.929548   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:00.127778   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:00.248030   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:00.429735   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:00.628677   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:00.748617   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:00.929619   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:01.022151   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:01.129003   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:01.248050   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:01.429693   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:01.627759   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:01.747471   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:01.929128   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:02.127997   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:02.246806   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:02.428812   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:02.629424   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:02.746958   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:02.928943   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:03.127837   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:03.246340   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:03.429590   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:03.521081   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:03.629119   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:03.747864   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:03.928721   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:04.127964   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:04.246904   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:04.429074   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:04.628191   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:04.747444   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:04.928646   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:05.127193   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:05.247122   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:05.429084   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:05.628033   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:05.747425   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:05.929696   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:06.021325   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:06.127505   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.247854   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.428548   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:06.628046   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.746956   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.929343   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.127225   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.247542   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.429290   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.628442   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.747242   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.929603   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.127493   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.247462   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.429641   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.520095   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:08.694434   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.795495   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.929743   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.154569   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.247149   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.428953   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.628707   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.748198   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.929598   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.128280   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.247191   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.428945   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.520407   24117 pod_ready.go:103] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:10.627840   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.746805   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.928704   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.020172   24117 pod_ready.go:93] pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.020192   24117 pod_ready.go:82] duration metric: took 57.504892803s for pod "amd-gpu-device-plugin-bl7z9" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.020202   24117 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vdvvc" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.024180   24117 pod_ready.go:93] pod "coredns-7c65d6cfc9-vdvvc" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.024199   24117 pod_ready.go:82] duration metric: took 3.990866ms for pod "coredns-7c65d6cfc9-vdvvc" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.024214   24117 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.028953   24117 pod_ready.go:93] pod "etcd-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.028988   24117 pod_ready.go:82] duration metric: took 4.768115ms for pod "etcd-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.029001   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.032950   24117 pod_ready.go:93] pod "kube-apiserver-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.032967   24117 pod_ready.go:82] duration metric: took 3.959136ms for pod "kube-apiserver-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.032975   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.036815   24117 pod_ready.go:93] pod "kube-controller-manager-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.036832   24117 pod_ready.go:82] duration metric: took 3.85051ms for pod "kube-controller-manager-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.036846   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xhqt" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.128335   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.247662   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.418368   24117 pod_ready.go:93] pod "kube-proxy-8xhqt" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.418388   24117 pod_ready.go:82] duration metric: took 381.535082ms for pod "kube-proxy-8xhqt" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.418398   24117 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.428857   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.628117   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.746910   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.819217   24117 pod_ready.go:93] pod "kube-scheduler-addons-237678" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:11.819244   24117 pod_ready.go:82] duration metric: took 400.838452ms for pod "kube-scheduler-addons-237678" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.819258   24117 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:11.928871   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.128084   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.247594   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.429097   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.627673   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.748387   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.929711   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.128100   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.247620   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.429563   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.628120   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.746945   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.825861   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:13.931204   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.128724   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.246740   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.429461   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.628331   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.747206   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.929242   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.128252   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.248689   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.428728   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.627719   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.747111   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.929056   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.128632   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.247450   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.324923   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:16.429655   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.628275   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.748338   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.929432   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.127967   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.247020   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:17.429989   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.629141   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.747103   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:17.929783   24117 kapi.go:107] duration metric: took 1m13.003957087s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 19:04:17.932040   24117 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-237678 cluster.
	I1213 19:04:17.933660   24117 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 19:04:17.935105   24117 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
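
The `gcp-auth-skip-secret` note above refers to a plain pod label. As a hedged sketch (only the label key comes from the message above; the pod name, image, namespace, and the label value "true" are illustrative assumptions), a pod that opts out of credential mounting could be built with client-go like this:

	package sketch

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// skipGCPAuthPod builds a pod carrying the gcp-auth-skip-secret label so the
	// gcp-auth addon leaves it alone. Name, image, and the label value are
	// assumptions; the log message above only specifies the label key.
	func skipGCPAuthPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-creds", // hypothetical pod name
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
	}
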
	I1213 19:04:18.129463   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.312289   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.325382   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:18.628918   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.810867   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.128680   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.312530   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.628175   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.747698   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.128002   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.246959   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.628331   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.748888   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.825081   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:21.128812   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.247802   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:21.628297   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.747331   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.127798   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.247056   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.628833   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.747759   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.128551   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.246967   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.325975   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:23.628263   24117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.748333   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.128995   24117 kapi.go:107] duration metric: took 1m23.505038003s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 19:04:24.246532   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.747788   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.247023   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.747855   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.825214   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:26.247642   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:26.746539   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.248313   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.746622   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.247247   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.324959   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:28.747521   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.247485   24117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.746895   24117 kapi.go:107] duration metric: took 1m27.504065503s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 19:04:29.748630   24117 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, storage-provisioner, default-storageclass, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 19:04:29.749754   24117 addons.go:510] duration metric: took 1m35.124044409s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin ingress-dns storage-provisioner default-storageclass cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
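
The interleaved "waiting for pod ... current state: Pending" lines above come from polling a label selector until every matching pod is Running. A minimal sketch of that pattern with client-go follows; the interval, timeout, and wiring are assumptions, not minikube's actual kapi.go implementation:

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls until every pod matching the selector is Running,
	// mirroring the per-selector wait loops logged above (e.g.
	// app.kubernetes.io/name=ingress-nginx).
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 10*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or nothing scheduled yet: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // at least one pod still Pending or similar
					}
				}
				return true, nil
			})
	}
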
	I1213 19:04:30.824491   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:33.326340   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:35.825043   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:38.324708   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:40.325107   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:42.824137   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:44.824643   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:46.896863   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:49.325012   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:51.825193   24117 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"False"
	I1213 19:04:54.325049   24117 pod_ready.go:93] pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:54.325071   24117 pod_ready.go:82] duration metric: took 42.505804813s for pod "metrics-server-84c5f94fbc-p2h9p" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:54.325082   24117 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5ppp7" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:54.329474   24117 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5ppp7" in "kube-system" namespace has status "Ready":"True"
	I1213 19:04:54.329494   24117 pod_ready.go:82] duration metric: took 4.404442ms for pod "nvidia-device-plugin-daemonset-5ppp7" in "kube-system" namespace to be "Ready" ...
	I1213 19:04:54.329510   24117 pod_ready.go:39] duration metric: took 1m40.899045115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
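
The pod_ready.go lines above report "Ready":"True" or "False" per pod, which maps onto the pod's PodReady status condition. A minimal sketch of that check, under the assumption that minikube inspects the same condition:

	package sketch

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's Ready condition is True, matching the
	// "Ready":"True"/"False" values logged above. Illustrative, not minikube's
	// actual helper.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false // condition not published yet, e.g. pod still Pending
	}
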
	I1213 19:04:54.329527   24117 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:04:54.329557   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:04:54.329608   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:04:54.362338   24117 cri.go:89] found id: "768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:54.362362   24117 cri.go:89] found id: ""
	I1213 19:04:54.362370   24117 logs.go:282] 1 containers: [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323]
	I1213 19:04:54.362423   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.365703   24117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:04:54.365772   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:04:54.399240   24117 cri.go:89] found id: "2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:54.399267   24117 cri.go:89] found id: ""
	I1213 19:04:54.399297   24117 logs.go:282] 1 containers: [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06]
	I1213 19:04:54.399352   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.402738   24117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:04:54.402794   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:04:54.437000   24117 cri.go:89] found id: "c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:54.437028   24117 cri.go:89] found id: ""
	I1213 19:04:54.437038   24117 logs.go:282] 1 containers: [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c]
	I1213 19:04:54.437080   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.440562   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:04:54.440619   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:04:54.473542   24117 cri.go:89] found id: "b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:54.473568   24117 cri.go:89] found id: ""
	I1213 19:04:54.473586   24117 logs.go:282] 1 containers: [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d]
	I1213 19:04:54.473643   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.476994   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:04:54.477050   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:04:54.510230   24117 cri.go:89] found id: "9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:54.510256   24117 cri.go:89] found id: ""
	I1213 19:04:54.510264   24117 logs.go:282] 1 containers: [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65]
	I1213 19:04:54.510321   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.513487   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:04:54.513557   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:04:54.546681   24117 cri.go:89] found id: "96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:54.546702   24117 cri.go:89] found id: ""
	I1213 19:04:54.546709   24117 logs.go:282] 1 containers: [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986]
	I1213 19:04:54.546764   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.550149   24117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:04:54.550198   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:04:54.582976   24117 cri.go:89] found id: "d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:04:54.583003   24117 cri.go:89] found id: ""
	I1213 19:04:54.583017   24117 logs.go:282] 1 containers: [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5]
	I1213 19:04:54.583059   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:54.586398   24117 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:04:54.586426   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:04:54.657463   24117 logs.go:123] Gathering logs for kubelet ...
	I1213 19:04:54.657497   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:04:54.737754   24117 logs.go:123] Gathering logs for kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] ...
	I1213 19:04:54.737789   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:54.781402   24117 logs.go:123] Gathering logs for etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] ...
	I1213 19:04:54.781435   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:54.827165   24117 logs.go:123] Gathering logs for kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] ...
	I1213 19:04:54.827203   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:54.865320   24117 logs.go:123] Gathering logs for kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] ...
	I1213 19:04:54.865348   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:54.919064   24117 logs.go:123] Gathering logs for container status ...
	I1213 19:04:54.919105   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:04:54.961693   24117 logs.go:123] Gathering logs for dmesg ...
	I1213 19:04:54.961722   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:04:54.973761   24117 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:04:54.973790   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:04:55.070984   24117 logs.go:123] Gathering logs for coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] ...
	I1213 19:04:55.071020   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:55.123537   24117 logs.go:123] Gathering logs for kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] ...
	I1213 19:04:55.123579   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:55.158283   24117 logs.go:123] Gathering logs for kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] ...
	I1213 19:04:55.158307   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
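
Each cri.go lookup in the pass above shells out to crictl with --quiet, which prints bare container IDs, one per line. A hedged local sketch of that step (minikube runs the same command over SSH inside the node via ssh_runner, as the log shows):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (running or exited) whose name matches,
	// reproducing the `sudo crictl ps -a --quiet --name=<name>` calls logged above.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per output line
	}
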
	I1213 19:04:57.691171   24117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:04:57.705266   24117 api_server.go:72] duration metric: took 2m3.07957193s to wait for apiserver process to appear ...
	I1213 19:04:57.705292   24117 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:04:57.705351   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:04:57.705406   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:04:57.738998   24117 cri.go:89] found id: "768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:57.739019   24117 cri.go:89] found id: ""
	I1213 19:04:57.739027   24117 logs.go:282] 1 containers: [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323]
	I1213 19:04:57.739074   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.742424   24117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:04:57.742494   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:04:57.783804   24117 cri.go:89] found id: "2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:57.783830   24117 cri.go:89] found id: ""
	I1213 19:04:57.783839   24117 logs.go:282] 1 containers: [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06]
	I1213 19:04:57.783894   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.808003   24117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:04:57.808080   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:04:57.843793   24117 cri.go:89] found id: "c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:57.843818   24117 cri.go:89] found id: ""
	I1213 19:04:57.843827   24117 logs.go:282] 1 containers: [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c]
	I1213 19:04:57.843867   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.847190   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:04:57.847246   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:04:57.881332   24117 cri.go:89] found id: "b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:57.881362   24117 cri.go:89] found id: ""
	I1213 19:04:57.881372   24117 logs.go:282] 1 containers: [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d]
	I1213 19:04:57.881418   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.885381   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:04:57.885448   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:04:57.921094   24117 cri.go:89] found id: "9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:57.921120   24117 cri.go:89] found id: ""
	I1213 19:04:57.921130   24117 logs.go:282] 1 containers: [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65]
	I1213 19:04:57.921183   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.924692   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:04:57.924760   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:04:57.956925   24117 cri.go:89] found id: "96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:57.956949   24117 cri.go:89] found id: ""
	I1213 19:04:57.956956   24117 logs.go:282] 1 containers: [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986]
	I1213 19:04:57.957004   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.960227   24117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:04:57.960276   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:04:57.993260   24117 cri.go:89] found id: "d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:04:57.993281   24117 cri.go:89] found id: ""
	I1213 19:04:57.993288   24117 logs.go:282] 1 containers: [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5]
	I1213 19:04:57.993333   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:04:57.996560   24117 logs.go:123] Gathering logs for kubelet ...
	I1213 19:04:57.996581   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:04:58.084333   24117 logs.go:123] Gathering logs for kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] ...
	I1213 19:04:58.084371   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:04:58.130474   24117 logs.go:123] Gathering logs for etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] ...
	I1213 19:04:58.130505   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:04:58.176247   24117 logs.go:123] Gathering logs for kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] ...
	I1213 19:04:58.176281   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:04:58.209200   24117 logs.go:123] Gathering logs for kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] ...
	I1213 19:04:58.209238   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:04:58.242860   24117 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:04:58.242886   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:04:58.322594   24117 logs.go:123] Gathering logs for dmesg ...
	I1213 19:04:58.322631   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:04:58.334843   24117 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:04:58.334874   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:04:58.435698   24117 logs.go:123] Gathering logs for coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] ...
	I1213 19:04:58.435724   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:04:58.488174   24117 logs.go:123] Gathering logs for kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] ...
	I1213 19:04:58.488206   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:04:58.527511   24117 logs.go:123] Gathering logs for kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] ...
	I1213 19:04:58.527540   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:04:58.581254   24117 logs.go:123] Gathering logs for container status ...
	I1213 19:04:58.581290   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:05:01.123166   24117 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 19:05:01.126725   24117 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 19:05:01.127678   24117 api_server.go:141] control plane version: v1.31.2
	I1213 19:05:01.127700   24117 api_server.go:131] duration metric: took 3.422401118s to wait for apiserver health ...
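
The healthz probe above is a plain HTTPS GET against /healthz that treats a 200 response with body "ok" as healthy, exactly as logged. A hedged sketch of that check; skipping TLS verification is an assumption for the test cluster's self-signed certificate:

	package sketch

	import (
		"crypto/tls"
		"io"
		"net/http"
	)

	// apiserverHealthy performs the GET shown above (e.g. against
	// https://192.168.49.2:8443/healthz) and requires a 200 with body "ok".
	func apiserverHealthy(url string) bool {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: self-signed cert
		}}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok"
	}
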
	I1213 19:05:01.127708   24117 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:05:01.127727   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:05:01.127777   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:05:01.161081   24117 cri.go:89] found id: "768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:05:01.161100   24117 cri.go:89] found id: ""
	I1213 19:05:01.161107   24117 logs.go:282] 1 containers: [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323]
	I1213 19:05:01.161146   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.164604   24117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:05:01.164676   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:05:01.198691   24117 cri.go:89] found id: "2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:05:01.198715   24117 cri.go:89] found id: ""
	I1213 19:05:01.198722   24117 logs.go:282] 1 containers: [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06]
	I1213 19:05:01.198764   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.201972   24117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:05:01.202041   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:05:01.236153   24117 cri.go:89] found id: "c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:05:01.236175   24117 cri.go:89] found id: ""
	I1213 19:05:01.236183   24117 logs.go:282] 1 containers: [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c]
	I1213 19:05:01.236237   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.240173   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:05:01.240246   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:05:01.273916   24117 cri.go:89] found id: "b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:05:01.273939   24117 cri.go:89] found id: ""
	I1213 19:05:01.273946   24117 logs.go:282] 1 containers: [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d]
	I1213 19:05:01.274001   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.277359   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:05:01.277416   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:05:01.309576   24117 cri.go:89] found id: "9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:05:01.309602   24117 cri.go:89] found id: ""
	I1213 19:05:01.309610   24117 logs.go:282] 1 containers: [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65]
	I1213 19:05:01.309652   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.312874   24117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:05:01.312938   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:05:01.345780   24117 cri.go:89] found id: "96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:05:01.345798   24117 cri.go:89] found id: ""
	I1213 19:05:01.345806   24117 logs.go:282] 1 containers: [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986]
	I1213 19:05:01.345845   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.349017   24117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:05:01.349089   24117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:05:01.384522   24117 cri.go:89] found id: "d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:05:01.384543   24117 cri.go:89] found id: ""
	I1213 19:05:01.384551   24117 logs.go:282] 1 containers: [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5]
	I1213 19:05:01.384591   24117 ssh_runner.go:195] Run: which crictl
	I1213 19:05:01.387790   24117 logs.go:123] Gathering logs for kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] ...
	I1213 19:05:01.387816   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
	I1213 19:05:01.433166   24117 logs.go:123] Gathering logs for etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] ...
	I1213 19:05:01.433196   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06"
	I1213 19:05:01.477176   24117 logs.go:123] Gathering logs for kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] ...
	I1213 19:05:01.477208   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d"
	I1213 19:05:01.515750   24117 logs.go:123] Gathering logs for kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] ...
	I1213 19:05:01.515780   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65"
	I1213 19:05:01.547760   24117 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:05:01.547785   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:05:01.623891   24117 logs.go:123] Gathering logs for container status ...
	I1213 19:05:01.623930   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:05:01.664416   24117 logs.go:123] Gathering logs for dmesg ...
	I1213 19:05:01.664455   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:05:01.677012   24117 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:05:01.677041   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:05:01.773315   24117 logs.go:123] Gathering logs for kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] ...
	I1213 19:05:01.773347   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986"
	I1213 19:05:01.829244   24117 logs.go:123] Gathering logs for kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] ...
	I1213 19:05:01.829284   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5"
	I1213 19:05:01.863117   24117 logs.go:123] Gathering logs for kubelet ...
	I1213 19:05:01.863153   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:05:01.946639   24117 logs.go:123] Gathering logs for coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] ...
	I1213 19:05:01.946676   24117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c"
	I1213 19:05:04.510780   24117 system_pods.go:59] 19 kube-system pods found
	I1213 19:05:04.510828   24117 system_pods.go:61] "amd-gpu-device-plugin-bl7z9" [53b1759f-8dcc-4454-ba3e-6feaf74540e7] Running
	I1213 19:05:04.510835   24117 system_pods.go:61] "coredns-7c65d6cfc9-vdvvc" [e7ae489a-7c45-40fb-8676-05e0be28bead] Running
	I1213 19:05:04.510839   24117 system_pods.go:61] "csi-hostpath-attacher-0" [68f49318-ecc3-4639-960c-0e788a457273] Running
	I1213 19:05:04.510850   24117 system_pods.go:61] "csi-hostpath-resizer-0" [356b4293-7940-44f3-ac81-f9413d5cbf9b] Running
	I1213 19:05:04.510854   24117 system_pods.go:61] "csi-hostpathplugin-97tn6" [eea99428-236d-4e3e-bf78-139bc53a1565] Running
	I1213 19:05:04.510857   24117 system_pods.go:61] "etcd-addons-237678" [5a4f15e1-e00d-47a0-b1dd-b0905caf5d03] Running
	I1213 19:05:04.510861   24117 system_pods.go:61] "kindnet-f9dml" [74b975ef-1918-49e4-a81a-550827609fc1] Running
	I1213 19:05:04.510864   24117 system_pods.go:61] "kube-apiserver-addons-237678" [0ae41178-7528-4943-900c-27b5b826c8cd] Running
	I1213 19:05:04.510868   24117 system_pods.go:61] "kube-controller-manager-addons-237678" [77273d82-9ac6-463f-8899-6f7c685eea58] Running
	I1213 19:05:04.510871   24117 system_pods.go:61] "kube-ingress-dns-minikube" [e759fa09-c5fa-4e06-8839-edc1e904b62e] Running
	I1213 19:05:04.510874   24117 system_pods.go:61] "kube-proxy-8xhqt" [55f3abc6-9664-46cf-9750-c30ed47c57f0] Running
	I1213 19:05:04.510877   24117 system_pods.go:61] "kube-scheduler-addons-237678" [5711179f-7df5-4e84-9b46-fad638dea898] Running
	I1213 19:05:04.510880   24117 system_pods.go:61] "metrics-server-84c5f94fbc-p2h9p" [d3e6cf22-81c6-4dd9-8a14-2e6cb15543f0] Running
	I1213 19:05:04.510885   24117 system_pods.go:61] "nvidia-device-plugin-daemonset-5ppp7" [c9d2d640-a841-4988-aaab-2a74cbfe5596] Running
	I1213 19:05:04.510888   24117 system_pods.go:61] "registry-5cc95cd69-sgzjd" [dc9a854b-15a2-47cc-b4c8-0f7c608e5335] Running
	I1213 19:05:04.510891   24117 system_pods.go:61] "registry-proxy-nnht8" [c1db19b5-cb0e-4cec-b6fb-69ed544cf362] Running
	I1213 19:05:04.510895   24117 system_pods.go:61] "snapshot-controller-56fcc65765-c4x78" [b09a009d-8270-47b0-92a1-1a15522bed87] Running
	I1213 19:05:04.510899   24117 system_pods.go:61] "snapshot-controller-56fcc65765-f2dhs" [88f04c09-91f5-447a-8cd2-08494d44cdb7] Running
	I1213 19:05:04.510905   24117 system_pods.go:61] "storage-provisioner" [1721d202-3c96-45c0-a0bb-8a5664f3274b] Running
	I1213 19:05:04.510910   24117 system_pods.go:74] duration metric: took 3.383196961s to wait for pod list to return data ...
	I1213 19:05:04.510919   24117 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:05:04.513317   24117 default_sa.go:45] found service account: "default"
	I1213 19:05:04.513339   24117 default_sa.go:55] duration metric: took 2.414259ms for default service account to be created ...
	I1213 19:05:04.513346   24117 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:05:04.521678   24117 system_pods.go:86] 19 kube-system pods found
	I1213 19:05:04.521707   24117 system_pods.go:89] "amd-gpu-device-plugin-bl7z9" [53b1759f-8dcc-4454-ba3e-6feaf74540e7] Running
	I1213 19:05:04.521714   24117 system_pods.go:89] "coredns-7c65d6cfc9-vdvvc" [e7ae489a-7c45-40fb-8676-05e0be28bead] Running
	I1213 19:05:04.521718   24117 system_pods.go:89] "csi-hostpath-attacher-0" [68f49318-ecc3-4639-960c-0e788a457273] Running
	I1213 19:05:04.521721   24117 system_pods.go:89] "csi-hostpath-resizer-0" [356b4293-7940-44f3-ac81-f9413d5cbf9b] Running
	I1213 19:05:04.521725   24117 system_pods.go:89] "csi-hostpathplugin-97tn6" [eea99428-236d-4e3e-bf78-139bc53a1565] Running
	I1213 19:05:04.521729   24117 system_pods.go:89] "etcd-addons-237678" [5a4f15e1-e00d-47a0-b1dd-b0905caf5d03] Running
	I1213 19:05:04.521733   24117 system_pods.go:89] "kindnet-f9dml" [74b975ef-1918-49e4-a81a-550827609fc1] Running
	I1213 19:05:04.521737   24117 system_pods.go:89] "kube-apiserver-addons-237678" [0ae41178-7528-4943-900c-27b5b826c8cd] Running
	I1213 19:05:04.521741   24117 system_pods.go:89] "kube-controller-manager-addons-237678" [77273d82-9ac6-463f-8899-6f7c685eea58] Running
	I1213 19:05:04.521745   24117 system_pods.go:89] "kube-ingress-dns-minikube" [e759fa09-c5fa-4e06-8839-edc1e904b62e] Running
	I1213 19:05:04.521749   24117 system_pods.go:89] "kube-proxy-8xhqt" [55f3abc6-9664-46cf-9750-c30ed47c57f0] Running
	I1213 19:05:04.521754   24117 system_pods.go:89] "kube-scheduler-addons-237678" [5711179f-7df5-4e84-9b46-fad638dea898] Running
	I1213 19:05:04.521758   24117 system_pods.go:89] "metrics-server-84c5f94fbc-p2h9p" [d3e6cf22-81c6-4dd9-8a14-2e6cb15543f0] Running
	I1213 19:05:04.521764   24117 system_pods.go:89] "nvidia-device-plugin-daemonset-5ppp7" [c9d2d640-a841-4988-aaab-2a74cbfe5596] Running
	I1213 19:05:04.521771   24117 system_pods.go:89] "registry-5cc95cd69-sgzjd" [dc9a854b-15a2-47cc-b4c8-0f7c608e5335] Running
	I1213 19:05:04.521774   24117 system_pods.go:89] "registry-proxy-nnht8" [c1db19b5-cb0e-4cec-b6fb-69ed544cf362] Running
	I1213 19:05:04.521781   24117 system_pods.go:89] "snapshot-controller-56fcc65765-c4x78" [b09a009d-8270-47b0-92a1-1a15522bed87] Running
	I1213 19:05:04.521784   24117 system_pods.go:89] "snapshot-controller-56fcc65765-f2dhs" [88f04c09-91f5-447a-8cd2-08494d44cdb7] Running
	I1213 19:05:04.521787   24117 system_pods.go:89] "storage-provisioner" [1721d202-3c96-45c0-a0bb-8a5664f3274b] Running
	I1213 19:05:04.521794   24117 system_pods.go:126] duration metric: took 8.442049ms to wait for k8s-apps to be running ...
	I1213 19:05:04.521803   24117 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:05:04.521847   24117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:05:04.533231   24117 system_svc.go:56] duration metric: took 11.418309ms WaitForService to wait for kubelet
	I1213 19:05:04.533263   24117 kubeadm.go:582] duration metric: took 2m9.907572714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:05:04.533282   24117 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:05:04.536474   24117 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 19:05:04.536508   24117 node_conditions.go:123] node cpu capacity is 8
	I1213 19:05:04.536522   24117 node_conditions.go:105] duration metric: took 3.235126ms to run NodePressure ...
	I1213 19:05:04.536537   24117 start.go:241] waiting for startup goroutines ...
	I1213 19:05:04.536547   24117 start.go:246] waiting for cluster config update ...
	I1213 19:05:04.536573   24117 start.go:255] writing updated cluster config ...
	I1213 19:05:04.536900   24117 ssh_runner.go:195] Run: rm -f paused
	I1213 19:05:04.585451   24117 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 19:05:04.588070   24117 out.go:177] * Done! kubectl is now configured to use "addons-237678" cluster and "default" namespace by default
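
For reference, the per-component log collection above can be reproduced by hand against a live node; a minimal sketch using the same commands the ssh_runner lines show (the container ID below is the kube-apiserver ID reported above and will differ between runs):

  minikube -p addons-237678 ssh "sudo crictl ps -a --quiet --name=kube-apiserver"
  minikube -p addons-237678 ssh "sudo crictl logs --tail 400 768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323"
  minikube -p addons-237678 ssh "sudo journalctl -u crio -n 400"
  minikube -p addons-237678 ssh "sudo journalctl -u kubelet -n 400"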
	
	
	==> CRI-O <==
	Dec 13 19:08:08 addons-237678 crio[1041]: time="2024-12-13 19:08:08.083966535Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-vp9bh Namespace:ingress-nginx ID:57f2066db984639882b439d7545032c474e8edbe4c240abca1aec73198d7833f UID:38d05de2-6a53-4611-873d-9fc07db7e393 NetNS:/var/run/netns/01a7fdd2-6470-463e-8b6f-a681d5d15fa7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 13 19:08:08 addons-237678 crio[1041]: time="2024-12-13 19:08:08.084090242Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-vp9bh from CNI network \"kindnet\" (type=ptp)"
	Dec 13 19:08:08 addons-237678 crio[1041]: time="2024-12-13 19:08:08.128628806Z" level=info msg="Stopped pod sandbox: 57f2066db984639882b439d7545032c474e8edbe4c240abca1aec73198d7833f" id=f1580920-0dda-4095-85d0-9d121e35a5bf name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:08 addons-237678 crio[1041]: time="2024-12-13 19:08:08.237156922Z" level=info msg="Removing container: 709300fe7b4a29aa737847af11c00fa446dc4c84ada7295d5e2bb675735ae131" id=20dbbbef-f333-491d-ad90-e779d4147d55 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:08:08 addons-237678 crio[1041]: time="2024-12-13 19:08:08.249350435Z" level=info msg="Removed container 709300fe7b4a29aa737847af11c00fa446dc4c84ada7295d5e2bb675735ae131: ingress-nginx/ingress-nginx-controller-5f85ff4588-vp9bh/controller" id=20dbbbef-f333-491d-ad90-e779d4147d55 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.340583192Z" level=info msg="Removing container: 8204d33f325d140d678f494fb9700e7cf0376a780abe99209a4b56465afe0524" id=a8fbc501-925e-4f14-9d7e-3c98c7ff5c69 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.352969993Z" level=info msg="Removed container 8204d33f325d140d678f494fb9700e7cf0376a780abe99209a4b56465afe0524: ingress-nginx/ingress-nginx-admission-patch-zpjz5/patch" id=a8fbc501-925e-4f14-9d7e-3c98c7ff5c69 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.354308867Z" level=info msg="Removing container: 8fda3ac0427bdea0fd54495fa7c3a39e3fbf790b8dfe55d578880e148aedfe25" id=caec7da4-82bd-4339-8bf3-83d2449dd034 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.366600982Z" level=info msg="Removed container 8fda3ac0427bdea0fd54495fa7c3a39e3fbf790b8dfe55d578880e148aedfe25: ingress-nginx/ingress-nginx-admission-create-xhsqd/create" id=caec7da4-82bd-4339-8bf3-83d2449dd034 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.367666274Z" level=info msg="Stopping pod sandbox: ca2328f09e7d23aaa9e2b59f66c7f283de6b80fc60d0f8d4ab9388555b9a9e98" id=65697eed-5b79-46c4-a227-b0709d4ff6f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.367693376Z" level=info msg="Stopped pod sandbox (already stopped): ca2328f09e7d23aaa9e2b59f66c7f283de6b80fc60d0f8d4ab9388555b9a9e98" id=65697eed-5b79-46c4-a227-b0709d4ff6f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.367871505Z" level=info msg="Removing pod sandbox: ca2328f09e7d23aaa9e2b59f66c7f283de6b80fc60d0f8d4ab9388555b9a9e98" id=c3e8b9d8-2868-4cdd-8561-34364edbb7d0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.373582033Z" level=info msg="Removed pod sandbox: ca2328f09e7d23aaa9e2b59f66c7f283de6b80fc60d0f8d4ab9388555b9a9e98" id=c3e8b9d8-2868-4cdd-8561-34364edbb7d0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.373925379Z" level=info msg="Stopping pod sandbox: 4df4e8d8c5fb52f4829506f050edf94176636351109da153c7545a53c443e3e4" id=a5f3ccfb-ef47-4692-af84-e2d1b8897b1b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.373966278Z" level=info msg="Stopped pod sandbox (already stopped): 4df4e8d8c5fb52f4829506f050edf94176636351109da153c7545a53c443e3e4" id=a5f3ccfb-ef47-4692-af84-e2d1b8897b1b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.374258744Z" level=info msg="Removing pod sandbox: 4df4e8d8c5fb52f4829506f050edf94176636351109da153c7545a53c443e3e4" id=85a337c0-09c3-4b64-a3d4-9c2c3e6d590d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.380336072Z" level=info msg="Removed pod sandbox: 4df4e8d8c5fb52f4829506f050edf94176636351109da153c7545a53c443e3e4" id=85a337c0-09c3-4b64-a3d4-9c2c3e6d590d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.380796013Z" level=info msg="Stopping pod sandbox: f663d45601fcd0a4ae397417e3c01eb6119095ceabee24c04f915520af1f1594" id=ece60c2b-2f75-4f9f-91db-3f4bdbc69fa1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.380847338Z" level=info msg="Stopped pod sandbox (already stopped): f663d45601fcd0a4ae397417e3c01eb6119095ceabee24c04f915520af1f1594" id=ece60c2b-2f75-4f9f-91db-3f4bdbc69fa1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.381176474Z" level=info msg="Removing pod sandbox: f663d45601fcd0a4ae397417e3c01eb6119095ceabee24c04f915520af1f1594" id=f6beaf7d-6bb2-4c8d-9de8-f4889fdd9198 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.388618752Z" level=info msg="Removed pod sandbox: f663d45601fcd0a4ae397417e3c01eb6119095ceabee24c04f915520af1f1594" id=f6beaf7d-6bb2-4c8d-9de8-f4889fdd9198 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.389049227Z" level=info msg="Stopping pod sandbox: 57f2066db984639882b439d7545032c474e8edbe4c240abca1aec73198d7833f" id=06f96b9f-751a-48ec-9841-da71f248325b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.389085870Z" level=info msg="Stopped pod sandbox (already stopped): 57f2066db984639882b439d7545032c474e8edbe4c240abca1aec73198d7833f" id=06f96b9f-751a-48ec-9841-da71f248325b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.389370604Z" level=info msg="Removing pod sandbox: 57f2066db984639882b439d7545032c474e8edbe4c240abca1aec73198d7833f" id=167bc85c-5aad-4ec5-8d00-c7546b1ae549 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:08:49 addons-237678 crio[1041]: time="2024-12-13 19:08:49.396439757Z" level=info msg="Removed pod sandbox: 57f2066db984639882b439d7545032c474e8edbe4c240abca1aec73198d7833f" id=167bc85c-5aad-4ec5-8d00-c7546b1ae549 name=/runtime.v1.RuntimeService/RemovePodSandbox
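
These CRI-O entries record the ingress-nginx controller and admission pods being torn down after the failed ingress test. Whether any sandboxes remain can be checked directly with crictl; a quick sketch:

  minikube -p addons-237678 ssh "sudo crictl pods --namespace ingress-nginx"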
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8bc4d923d8c06       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   8f4a46f532ce1       hello-world-app-55bf9c44b4-nw6xv
	28b1021684d14       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   8be5ed4885fab       nginx
	043729a8f1a04       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   710d41dd224bd       busybox
	fa05dedb223af       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   29b6c4fe6108c       metrics-server-84c5f94fbc-p2h9p
	c480313fefdec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   bb9d16d7e5dff       coredns-7c65d6cfc9-vdvvc
	2534fa12b02a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   b6917787d0dcd       storage-provisioner
	d87bbe9c87d8d       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3                      8 minutes ago       Running             kindnet-cni               0                   603c780b2c72a       kindnet-f9dml
	9f5557cd0de04       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   275c47aca081f       kube-proxy-8xhqt
	b30b864697aec       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   c6c6b6d835e3e       kube-scheduler-addons-237678
	96317b3727960       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   1243d5f6d7c66       kube-controller-manager-addons-237678
	768b5c4c34a15       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   e7a1cca37bfd5       kube-apiserver-addons-237678
	2c5f3c09909f8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   e6956a31cb336       etcd-addons-237678
	
	
	==> coredns [c480313fefdecf16380dfbeabbfeb8dc349156fa709bad48529509ca023d876c] <==
	[INFO] 10.244.0.22:57033 - 26630 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00579033s
	[INFO] 10.244.0.22:60984 - 16431 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005933155s
	[INFO] 10.244.0.22:57033 - 12808 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071126s
	[INFO] 10.244.0.22:39702 - 16922 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006183035s
	[INFO] 10.244.0.22:48363 - 56043 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000330511s
	[INFO] 10.244.0.22:60984 - 18140 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000084295s
	[INFO] 10.244.0.22:39702 - 3756 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000162127s
	[INFO] 10.244.0.22:38474 - 41974 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005910026s
	[INFO] 10.244.0.22:56711 - 32535 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006103731s
	[INFO] 10.244.0.22:58363 - 16222 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006271197s
	[INFO] 10.244.0.22:49857 - 8505 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006003695s
	[INFO] 10.244.0.22:45515 - 6193 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006257124s
	[INFO] 10.244.0.22:54983 - 51870 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006453432s
	[INFO] 10.244.0.22:56711 - 51086 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004122402s
	[INFO] 10.244.0.22:45515 - 46542 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00464936s
	[INFO] 10.244.0.22:38474 - 15089 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004294458s
	[INFO] 10.244.0.22:49857 - 56217 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00498444s
	[INFO] 10.244.0.22:54983 - 11027 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005422133s
	[INFO] 10.244.0.22:58363 - 3072 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005706905s
	[INFO] 10.244.0.22:38474 - 5083 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076404s
	[INFO] 10.244.0.22:56711 - 56124 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000409123s
	[INFO] 10.244.0.22:45515 - 21786 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000186104s
	[INFO] 10.244.0.22:54983 - 61784 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000170399s
	[INFO] 10.244.0.22:58363 - 39892 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000190901s
	[INFO] 10.244.0.22:49857 - 24664 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000178957s
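
The NXDOMAIN lines above are search-path expansion (the .google.internal suffix likely comes from the GCP CI host's resolver) and are harmless; the bare service FQDN answers NOERROR each time. To re-run the same lookup from inside the cluster, a sketch with a throwaway pod (the image tag is illustrative):

  kubectl --context addons-237678 run dns-check --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup hello-world-app.default.svc.cluster.local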
	
	
	==> describe nodes <==
	Name:               addons-237678
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-237678
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=addons-237678
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T19_02_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-237678
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:02:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-237678
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:11:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:08:25 +0000   Fri, 13 Dec 2024 19:02:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:08:25 +0000   Fri, 13 Dec 2024 19:02:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:08:25 +0000   Fri, 13 Dec 2024 19:02:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:08:25 +0000   Fri, 13 Dec 2024 19:03:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-237678
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 a39f8add46c4434a84f945353a7f0dd2
	  System UUID:                3db003e5-459d-48ce-93a9-cf79d8436984
	  Boot ID:                    c9637a07-3c27-4cb7-b1b1-da5edcdac29f
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  default                     hello-world-app-55bf9c44b4-nw6xv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 coredns-7c65d6cfc9-vdvvc                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m27s
	  kube-system                 etcd-addons-237678                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m32s
	  kube-system                 kindnet-f9dml                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m27s
	  kube-system                 kube-apiserver-addons-237678             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-controller-manager-addons-237678    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-8xhqt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-scheduler-addons-237678             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 metrics-server-84c5f94fbc-p2h9p          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m23s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m22s                  kube-proxy       
	  Normal   Starting                 8m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m37s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m37s (x8 over 8m37s)  kubelet          Node addons-237678 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m37s (x8 over 8m37s)  kubelet          Node addons-237678 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m37s (x7 over 8m37s)  kubelet          Node addons-237678 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m32s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m32s                  kubelet          Node addons-237678 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m32s                  kubelet          Node addons-237678 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m32s                  kubelet          Node addons-237678 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m28s                  node-controller  Node addons-237678 event: Registered Node addons-237678 in Controller
	  Normal   NodeReady                8m8s                   kubelet          Node addons-237678 status is now: NodeReady
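
The condition and capacity data in this describe output can also be pulled selectively without the full dump; a sketch using kubectl's jsonpath support:

  kubectl --context addons-237678 get node addons-237678 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  kubectl --context addons-237678 get node addons-237678 -o jsonpath='{.status.allocatable}'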
	
	
	==> dmesg <==
	[  +0.000810] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000872] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000934] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000890] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.642001] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025181] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.037072] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.033073] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.269869] kauditd_printk_skb: 46 callbacks suppressed
	[Dec13 19:05] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[  +1.027832] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[  +2.015864] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[  +4.159712] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[Dec13 19:06] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[ +16.122837] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
	[ +33.533567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa e4 f8 54 51 d4 5e c0 af 95 ac 60 08 00
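
The martian-source bursts line up with the ingress test's curl against 127.0.0.1: packets sourced from 127.0.0.1 arriving on eth0 are treated as martians unless route_localnet is set, and kube-proxy reports setting it in its log below. A sketch for checking the relevant sysctls on the node:

  minikube -p addons-237678 ssh "sysctl net.ipv4.conf.all.route_localnet"
  minikube -p addons-237678 ssh "sysctl net.ipv4.conf.eth0.rp_filter net.ipv4.conf.all.log_martians"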
	
	
	==> etcd [2c5f3c09909f80d5df93ac80c1ca2c3e74919eae5fc1a5b3c16fcefdedbd6f06] <==
	{"level":"warn","ts":"2024-12-13T19:02:57.412382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.79144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-8xhqt\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-12-13T19:02:57.424561Z","caller":"traceutil/trace.go:171","msg":"trace[518213457] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-8xhqt; range_end:; response_count:1; response_revision:412; }","duration":"194.963283ms","start":"2024-12-13T19:02:57.229581Z","end":"2024-12-13T19:02:57.424544Z","steps":["trace[518213457] 'agreement among raft nodes before linearized reading'  (duration: 182.76233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:02:57.529280Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.117592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-13T19:02:57.531513Z","caller":"traceutil/trace.go:171","msg":"trace[141944507] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:412; }","duration":"107.372517ms","start":"2024-12-13T19:02:57.424132Z","end":"2024-12-13T19:02:57.531504Z","steps":["trace[141944507] 'range keys from in-memory index tree'  (duration: 105.051819ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:57.529614Z","caller":"traceutil/trace.go:171","msg":"trace[715618318] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"105.452251ms","start":"2024-12-13T19:02:57.424146Z","end":"2024-12-13T19:02:57.529598Z","steps":["trace[715618318] 'process raft request'  (duration: 83.784366ms)","trace[715618318] 'compare'  (duration: 21.103338ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-13T19:02:57.529755Z","caller":"traceutil/trace.go:171","msg":"trace[1011888473] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"105.349915ms","start":"2024-12-13T19:02:57.424396Z","end":"2024-12-13T19:02:57.529746Z","steps":["trace[1011888473] 'process raft request'  (duration: 105.049786ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:57.530150Z","caller":"traceutil/trace.go:171","msg":"trace[1053731736] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"105.6439ms","start":"2024-12-13T19:02:57.424495Z","end":"2024-12-13T19:02:57.530139Z","steps":["trace[1053731736] 'process raft request'  (duration: 105.007817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:02:58.508033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.048898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-12-13T19:02:58.508215Z","caller":"traceutil/trace.go:171","msg":"trace[2101835278] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:475; }","duration":"189.236433ms","start":"2024-12-13T19:02:58.318964Z","end":"2024-12-13T19:02:58.508201Z","steps":["trace[2101835278] 'agreement among raft nodes before linearized reading'  (duration: 189.018458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:02:58.508268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.623514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2024-12-13T19:02:58.508419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.298571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:02:58.508765Z","caller":"traceutil/trace.go:171","msg":"trace[285053497] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:475; }","duration":"189.639321ms","start":"2024-12-13T19:02:58.319112Z","end":"2024-12-13T19:02:58.508751Z","steps":["trace[285053497] 'agreement among raft nodes before linearized reading'  (duration: 189.284697ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:58.508104Z","caller":"traceutil/trace.go:171","msg":"trace[514971690] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"179.889947ms","start":"2024-12-13T19:02:58.328199Z","end":"2024-12-13T19:02:58.508089Z","steps":["trace[514971690] 'process raft request'  (duration: 95.211676ms)","trace[514971690] 'compare'  (duration: 84.133929ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:02:58.508059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.633376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:02:58.509063Z","caller":"traceutil/trace.go:171","msg":"trace[1132366134] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io; range_end:; response_count:0; response_revision:475; }","duration":"189.639304ms","start":"2024-12-13T19:02:58.319412Z","end":"2024-12-13T19:02:58.509051Z","steps":["trace[1132366134] 'agreement among raft nodes before linearized reading'  (duration: 188.613809ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:02:58.508657Z","caller":"traceutil/trace.go:171","msg":"trace[1651366501] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:475; }","duration":"190.009596ms","start":"2024-12-13T19:02:58.318636Z","end":"2024-12-13T19:02:58.508646Z","steps":["trace[1651366501] 'agreement among raft nodes before linearized reading'  (duration: 189.607172ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:08.691693Z","caller":"traceutil/trace.go:171","msg":"trace[444463932] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"104.310468ms","start":"2024-12-13T19:04:08.587356Z","end":"2024-12-13T19:04:08.691666Z","steps":["trace[444463932] 'process raft request'  (duration: 104.195414ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:08.897222Z","caller":"traceutil/trace.go:171","msg":"trace[453905227] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"114.705815ms","start":"2024-12-13T19:04:08.782495Z","end":"2024-12-13T19:04:08.897201Z","steps":["trace[453905227] 'process raft request'  (duration: 35.816821ms)","trace[453905227] 'compare'  (duration: 78.780002ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:04:09.150471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.202861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-12-13T19:04:09.150511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.696987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/amd-gpu-device-plugin-bl7z9\" ","response":"range_response_count:1 size:4338"}
	{"level":"info","ts":"2024-12-13T19:04:09.150545Z","caller":"traceutil/trace.go:171","msg":"trace[2083324233] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1136; }","duration":"139.298512ms","start":"2024-12-13T19:04:09.011232Z","end":"2024-12-13T19:04:09.150531Z","steps":["trace[2083324233] 'range keys from in-memory index tree'  (duration: 139.089038ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:09.150560Z","caller":"traceutil/trace.go:171","msg":"trace[738570761] range","detail":"{range_begin:/registry/pods/kube-system/amd-gpu-device-plugin-bl7z9; range_end:; response_count:1; response_revision:1136; }","duration":"133.751762ms","start":"2024-12-13T19:04:09.016797Z","end":"2024-12-13T19:04:09.150548Z","steps":["trace[738570761] 'range keys from in-memory index tree'  (duration: 133.592583ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:46.892913Z","caller":"traceutil/trace.go:171","msg":"trace[604936375] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"115.826269ms","start":"2024-12-13T19:04:46.777060Z","end":"2024-12-13T19:04:46.892887Z","steps":["trace[604936375] 'process raft request'  (duration: 115.561063ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:47.002866Z","caller":"traceutil/trace.go:171","msg":"trace[1006121228] transaction","detail":"{read_only:false; response_revision:1271; number_of_response:1; }","duration":"106.981938ms","start":"2024-12-13T19:04:46.895869Z","end":"2024-12-13T19:04:47.002851Z","steps":["trace[1006121228] 'process raft request'  (duration: 67.969982ms)","trace[1006121228] 'compare'  (duration: 38.924556ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-13T19:06:00.058196Z","caller":"traceutil/trace.go:171","msg":"trace[1753636216] transaction","detail":"{read_only:false; response_revision:1664; number_of_response:1; }","duration":"116.964381ms","start":"2024-12-13T19:05:59.941214Z","end":"2024-12-13T19:06:00.058179Z","steps":["trace[1753636216] 'process raft request'  (duration: 116.758092ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:11:21 up 53 min,  0 users,  load average: 0.02, 0.26, 0.22
	Linux addons-237678 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d87bbe9c87d8df97046802a4349a390a21d48e7394a3237cafc3c39f5e1e0aa5] <==
	I1213 19:09:13.031405       1 main.go:301] handling current node
	I1213 19:09:23.037466       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:09:23.037501       1 main.go:301] handling current node
	I1213 19:09:33.035778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:09:33.035812       1 main.go:301] handling current node
	I1213 19:09:43.034877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:09:43.034914       1 main.go:301] handling current node
	I1213 19:09:53.035352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:09:53.035385       1 main.go:301] handling current node
	I1213 19:10:03.028769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:10:03.028799       1 main.go:301] handling current node
	I1213 19:10:13.028097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:10:13.028193       1 main.go:301] handling current node
	I1213 19:10:23.037614       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:10:23.037647       1 main.go:301] handling current node
	I1213 19:10:33.037701       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:10:33.037737       1 main.go:301] handling current node
	I1213 19:10:43.028757       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:10:43.028801       1 main.go:301] handling current node
	I1213 19:10:53.028377       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:10:53.028432       1 main.go:301] handling current node
	I1213 19:11:03.028865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:11:03.028904       1 main.go:301] handling current node
	I1213 19:11:13.031386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:11:13.031424       1 main.go:301] handling current node
	
	
	==> kube-apiserver [768b5c4c34a157cc529fd7446e89d50e629687cc21ac3091d2f68b6567aed323] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 19:04:53.953308       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.182.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.182.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.182.114:443: connect: connection refused" logger="UnhandledError"
	I1213 19:04:53.985224       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 19:05:15.282984       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43348: use of closed network connection
	E1213 19:05:15.441662       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43374: use of closed network connection
	I1213 19:05:24.411840       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.226.224"}
	I1213 19:05:30.166719       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1213 19:05:31.281752       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1213 19:05:35.602575       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 19:05:35.777233       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.134.99"}
	I1213 19:06:00.136341       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1213 19:06:14.882102       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1213 19:06:26.921832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.921884       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:26.939583       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.939755       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:26.951069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.951126       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:26.963086       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:26.963122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 19:06:27.940603       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 19:06:28.007543       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1213 19:06:28.014242       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1213 19:08:01.015249       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.253.170"}
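
The 19:04:53 connection-refused error against 10.109.182.114:443 is the aggregated metrics API being unreachable, consistent with the metrics-server failure in this run. The APIService's availability can be checked directly; a sketch:

  kubectl --context addons-237678 get apiservice v1beta1.metrics.k8s.io \
    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{": "}{.status.conditions[?(@.type=="Available")].message}{"\n"}'
  kubectl --context addons-237678 top nodes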
	
	
	==> kube-controller-manager [96317b37279608346b656d04c5751c4dbf8c3afa5b8582c2f95863f211ed2986] <==
	E1213 19:08:32.464459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:08:57.696274       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:08:57.696323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:04.329380       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:04.329423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:08.066646       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:08.066688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:13.508945       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:13.508986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:50.398804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:50.398852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:55.874098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:55.874141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:56.257399       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:56.257436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:03.578859       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:03.578899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:26.801757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:26.801800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:45.994043       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:45.994095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:47.637439       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:47.637483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:52.237280       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:52.237325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
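
These repeating PartialObjectMetadata errors are the metadata informer retrying list/watch for resource types whose CRDs were deleted earlier (the snapshot.storage.k8s.io and gadget.kinvolk.io groups torn down in the apiserver log above); they are noisy but expected after an addon is disabled. A sketch for confirming the CRDs are gone:

  kubectl --context addons-237678 get crd -o name | grep -E 'snapshot|gadget' \
    || echo "no matching CRDs"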
	
	
	==> kube-proxy [9f5557cd0de04fe8a97c679f6ad06c7babd49474c6bc16794a8ede5ad2e75a65] <==
	I1213 19:02:56.911144       1 server_linux.go:66] "Using iptables proxy"
	I1213 19:02:58.111944       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1213 19:02:58.112087       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:02:58.531727       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:02:58.531788       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:02:58.610889       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:02:58.611812       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:02:58.612206       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:02:58.614953       1 config.go:199] "Starting service config controller"
	I1213 19:02:58.614972       1 config.go:328] "Starting node config controller"
	I1213 19:02:58.614974       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:02:58.614986       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:02:58.614995       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:02:58.614984       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:02:58.715834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:02:58.715850       1 shared_informer.go:320] Caches are synced for node config
	I1213 19:02:58.715877       1 shared_informer.go:320] Caches are synced for service config
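
On the nodePortAddresses warning at startup: in a kubeadm-provisioned cluster the effective kube-proxy settings live in the kube-proxy ConfigMap, so the current value can be inspected there. A sketch, assuming the standard ConfigMap layout (the config.conf key name is kubeadm's default):

  kubectl --context addons-237678 -n kube-system get configmap kube-proxy \
    -o jsonpath='{.data.config\.conf}' | grep -n -A1 nodePortAddresses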
	
	
	==> kube-scheduler [b30b864697aec099887caf6ae171db5768501c79191dccddf3cecf1fc4f9dc8d] <==
	W1213 19:02:46.829758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:02:46.829838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 19:02:46.829885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 19:02:46.829978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:46.829995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1213 19:02:46.830000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:46.830031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:02:46.830064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.829767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 19:02:46.830097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:46.830230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:46.830254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:47.634169       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 19:02:47.634209       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1213 19:02:47.737007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:02:47.737043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:47.867352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:02:47.867388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:02:47.919781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:02:47.919825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1213 19:02:50.228342       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
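
The "forbidden" reflector errors above are a bootstrap race: the scheduler starts listing resources before the control plane has finished reconciling its RBAC policy, and the closing "Caches are synced" line shows the condition cleared on its own within a few seconds. A quick post-hoc check that the scheduler's permissions are in place (a sketch; impersonation via --as assumes your own kubeconfig user is allowed to impersonate):

	kubectl --context addons-237678 auth can-i list pods \
	  --as=system:kube-scheduler --all-namespaces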
	
	
	==> kubelet <==
	Dec 13 19:09:29 addons-237678 kubelet[1640]: E1213 19:09:29.294282    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116969293963974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:36 addons-237678 kubelet[1640]: I1213 19:09:36.120714    1640 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 19:09:39 addons-237678 kubelet[1640]: E1213 19:09:39.296457    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116979296228962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:39 addons-237678 kubelet[1640]: E1213 19:09:39.296486    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116979296228962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:49 addons-237678 kubelet[1640]: E1213 19:09:49.299091    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116989298800529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:49 addons-237678 kubelet[1640]: E1213 19:09:49.299131    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116989298800529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:59 addons-237678 kubelet[1640]: E1213 19:09:59.301231    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116999300980313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:59 addons-237678 kubelet[1640]: E1213 19:09:59.301263    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116999300980313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:09 addons-237678 kubelet[1640]: E1213 19:10:09.303731    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117009303479514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:09 addons-237678 kubelet[1640]: E1213 19:10:09.303769    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117009303479514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:19 addons-237678 kubelet[1640]: E1213 19:10:19.305754    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117019305508692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:19 addons-237678 kubelet[1640]: E1213 19:10:19.305785    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117019305508692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:29 addons-237678 kubelet[1640]: E1213 19:10:29.307827    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117029307610268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:29 addons-237678 kubelet[1640]: E1213 19:10:29.307864    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117029307610268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:39 addons-237678 kubelet[1640]: E1213 19:10:39.309954    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117039309730546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:39 addons-237678 kubelet[1640]: E1213 19:10:39.309992    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117039309730546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:49 addons-237678 kubelet[1640]: E1213 19:10:49.312492    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117049312251962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:49 addons-237678 kubelet[1640]: E1213 19:10:49.312526    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117049312251962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:53 addons-237678 kubelet[1640]: I1213 19:10:53.120272    1640 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 19:10:59 addons-237678 kubelet[1640]: E1213 19:10:59.316391    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117059316122572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:59 addons-237678 kubelet[1640]: E1213 19:10:59.316423    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117059316122572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:11:09 addons-237678 kubelet[1640]: E1213 19:11:09.319124    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117069318831645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:11:09 addons-237678 kubelet[1640]: E1213 19:11:09.319155    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117069318831645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:11:19 addons-237678 kubelet[1640]: E1213 19:11:19.321515    1640 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117079321278521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:11:19 addons-237678 kubelet[1640]: E1213 19:11:19.321555    1640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117079321278521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
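
The repeating eviction-manager errors above show the kubelet rejecting CRI-O's ImageFsInfo response as incomplete (note the empty ContainerFilesystems list in every message); they recur on the manager's ten-second sync loop rather than indicating new failures. To look at the same stats the kubelet is parsing, the CRI endpoint can be queried directly from inside the node; a sketch:

	# sketch: dump CRI-O's image filesystem usage from inside the minikube node
	minikube -p addons-237678 ssh -- sudo crictl imagefsinfo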
	
	
	==> storage-provisioner [2534fa12b02a5babd54edc685b232b2e6932f85b1d900f193792502cb9b3863d] <==
	I1213 19:03:14.317726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 19:03:14.326350       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 19:03:14.326404       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 19:03:14.332512       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 19:03:14.332673       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-237678_4c270fd9-af4a-43bd-b164-6ce955f2bfb9!
	I1213 19:03:14.333652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad7c13ff-a318-449b-9520-fc6d6f2d250a", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-237678_4c270fd9-af4a-43bd-b164-6ce955f2bfb9 became leader
	I1213 19:03:14.432853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-237678_4c270fd9-af4a-43bd-b164-6ce955f2bfb9!
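
The provisioner coordinates through legacy Endpoints-based leader election, and the "became leader" event above records the acquired lease. A sketch for inspecting the election record it holds (looking for the control-plane.alpha.kubernetes.io/leader annotation is an assumption based on client-go's old endpoints lock):

	kubectl --context addons-237678 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml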
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-237678 -n addons-237678
helpers_test.go:261: (dbg) Run:  kubectl --context addons-237678 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (358.87s)

                                                
                                    

Test pass (302/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 19.28
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 16.94
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.81
18 TestDownloadOnly/v1.31.2/DeleteAll 0.2
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.76
22 TestOffline 50.76
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 169.77
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 10.45
35 TestAddons/parallel/Registry 16.94
37 TestAddons/parallel/InspektorGadget 11.63
40 TestAddons/parallel/CSI 53.06
41 TestAddons/parallel/Headlamp 17.41
42 TestAddons/parallel/CloudSpanner 5.5
43 TestAddons/parallel/LocalPath 60.78
44 TestAddons/parallel/NvidiaDevicePlugin 6.45
45 TestAddons/parallel/Yakd 11.62
46 TestAddons/parallel/AmdGpuDevicePlugin 5.45
47 TestAddons/StoppedEnableDisable 12.04
48 TestCertOptions 27.62
49 TestCertExpiration 221.22
51 TestForceSystemdFlag 30.64
52 TestForceSystemdEnv 38.49
54 TestKVMDriverInstallOrUpdate 5
58 TestErrorSpam/setup 23.2
59 TestErrorSpam/start 0.57
60 TestErrorSpam/status 0.87
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.6
63 TestErrorSpam/stop 1.36
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 45.34
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.83
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.91
75 TestFunctional/serial/CacheCmd/cache/add_local 2.06
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 31.97
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.36
87 TestFunctional/serial/InvalidService 4.04
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 10.45
91 TestFunctional/parallel/DryRun 0.33
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.94
97 TestFunctional/parallel/ServiceCmdConnect 24.7
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 44.24
101 TestFunctional/parallel/SSHCmd 0.59
102 TestFunctional/parallel/CpCmd 1.71
103 TestFunctional/parallel/MySQL 23.93
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.64
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
113 TestFunctional/parallel/License 1.15
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.19
121 TestFunctional/parallel/ImageCommands/Setup 1.91
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.2
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.5
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
140 TestFunctional/parallel/ServiceCmd/DeployApp 19.21
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
142 TestFunctional/parallel/ProfileCmd/profile_list 0.36
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
144 TestFunctional/parallel/MountCmd/any-port 12.05
145 TestFunctional/parallel/ServiceCmd/List 1.73
146 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
148 TestFunctional/parallel/ServiceCmd/Format 0.52
149 TestFunctional/parallel/ServiceCmd/URL 0.57
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
153 TestFunctional/parallel/MountCmd/specific-port 1.53
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 104.3
162 TestMultiControlPlane/serial/DeployApp 6.14
163 TestMultiControlPlane/serial/PingHostFromPods 1.02
164 TestMultiControlPlane/serial/AddWorkerNode 33.97
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
167 TestMultiControlPlane/serial/CopyFile 16.24
168 TestMultiControlPlane/serial/StopSecondaryNode 12.51
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
170 TestMultiControlPlane/serial/RestartSecondaryNode 43.93
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 203.94
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.37
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
175 TestMultiControlPlane/serial/StopCluster 35.43
176 TestMultiControlPlane/serial/RestartCluster 58.91
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
178 TestMultiControlPlane/serial/AddSecondaryNode 47.37
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
183 TestJSONOutput/start/Command 44.54
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.66
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.59
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.74
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
208 TestKicCustomNetwork/create_custom_network 35.82
209 TestKicCustomNetwork/use_default_bridge_network 22.89
210 TestKicExistingNetwork 25.85
211 TestKicCustomSubnet 23.8
212 TestKicStaticIP 26.59
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 45.74
217 TestMountStart/serial/StartWithMountFirst 6.4
218 TestMountStart/serial/VerifyMountFirst 0.25
219 TestMountStart/serial/StartWithMountSecond 6.21
220 TestMountStart/serial/VerifyMountSecond 0.25
221 TestMountStart/serial/DeleteFirst 1.61
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.95
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 70.43
229 TestMultiNode/serial/DeployApp2Nodes 5.33
230 TestMultiNode/serial/PingHostFrom2Pods 0.72
231 TestMultiNode/serial/AddNode 32.41
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.63
234 TestMultiNode/serial/CopyFile 9.2
235 TestMultiNode/serial/StopNode 2.12
236 TestMultiNode/serial/StartAfterStop 8.91
237 TestMultiNode/serial/RestartKeepsNodes 79.59
238 TestMultiNode/serial/DeleteNode 4.98
239 TestMultiNode/serial/StopMultiNode 23.72
240 TestMultiNode/serial/RestartMultiNode 53.19
241 TestMultiNode/serial/ValidateNameConflict 22.63
246 TestPreload 118.69
248 TestScheduledStopUnix 96.22
251 TestInsufficientStorage 12.4
252 TestRunningBinaryUpgrade 150.73
254 TestKubernetesUpgrade 346.34
255 TestMissingContainerUpgrade 116.99
256 TestStoppedBinaryUpgrade/Setup 2.7
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 29.49
260 TestStoppedBinaryUpgrade/Upgrade 130.58
261 TestNoKubernetes/serial/StartWithStopK8s 7.36
262 TestNoKubernetes/serial/Start 13.15
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
264 TestNoKubernetes/serial/ProfileList 0.79
273 TestPause/serial/Start 48.49
274 TestNoKubernetes/serial/Stop 1.24
275 TestNoKubernetes/serial/StartNoArgs 7.31
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
277 TestPause/serial/SecondStartNoReconfiguration 35.58
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
279 TestPause/serial/Pause 0.78
280 TestPause/serial/VerifyStatus 0.37
281 TestPause/serial/Unpause 0.68
282 TestPause/serial/PauseAgain 0.89
283 TestPause/serial/DeletePaused 4.46
284 TestPause/serial/VerifyDeletedResources 0.79
292 TestNetworkPlugins/group/false 7.94
297 TestStartStop/group/old-k8s-version/serial/FirstStart 119.48
299 TestStartStop/group/no-preload/serial/FirstStart 57.3
300 TestStartStop/group/no-preload/serial/DeployApp 10.26
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
302 TestStartStop/group/no-preload/serial/Stop 11.85
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/no-preload/serial/SecondStart 262.63
305 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
307 TestStartStop/group/old-k8s-version/serial/Stop 11.84
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
309 TestStartStop/group/old-k8s-version/serial/SecondStart 126.77
311 TestStartStop/group/embed-certs/serial/FirstStart 46.73
312 TestStartStop/group/embed-certs/serial/DeployApp 10.26
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
314 TestStartStop/group/embed-certs/serial/Stop 11.89
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/embed-certs/serial/SecondStart 262.19
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 47
320 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
321 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
322 TestStartStop/group/old-k8s-version/serial/Pause 3.04
324 TestStartStop/group/newest-cni/serial/FirstStart 28.98
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
327 TestStartStop/group/newest-cni/serial/Stop 1.19
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/newest-cni/serial/SecondStart 12.91
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
334 TestStartStop/group/newest-cni/serial/Pause 3.11
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
337 TestNetworkPlugins/group/auto/Start 42.65
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.06
341 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
342 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
343 TestStartStop/group/no-preload/serial/Pause 3.21
344 TestNetworkPlugins/group/kindnet/Start 44.49
345 TestNetworkPlugins/group/auto/KubeletFlags 0.31
346 TestNetworkPlugins/group/auto/NetCatPod 9.22
347 TestNetworkPlugins/group/auto/DNS 0.13
348 TestNetworkPlugins/group/auto/Localhost 0.11
349 TestNetworkPlugins/group/auto/HairPin 0.11
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/Start 58.8
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
353 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
354 TestNetworkPlugins/group/kindnet/DNS 0.14
355 TestNetworkPlugins/group/kindnet/Localhost 0.11
356 TestNetworkPlugins/group/kindnet/HairPin 0.1
357 TestNetworkPlugins/group/custom-flannel/Start 48.79
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/KubeletFlags 0.27
360 TestNetworkPlugins/group/calico/NetCatPod 11.16
361 TestNetworkPlugins/group/calico/DNS 0.12
362 TestNetworkPlugins/group/calico/Localhost 0.11
363 TestNetworkPlugins/group/calico/HairPin 0.1
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
366 TestNetworkPlugins/group/custom-flannel/DNS 0.13
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
369 TestNetworkPlugins/group/flannel/Start 57.05
370 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
371 TestNetworkPlugins/group/bridge/Start 38.74
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/embed-certs/serial/Pause 2.8
375 TestNetworkPlugins/group/enable-default-cni/Start 66.67
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
377 TestNetworkPlugins/group/bridge/NetCatPod 10.18
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
380 TestNetworkPlugins/group/flannel/NetCatPod 8.17
381 TestNetworkPlugins/group/bridge/DNS 16.11
382 TestNetworkPlugins/group/flannel/DNS 0.13
383 TestNetworkPlugins/group/flannel/Localhost 0.11
384 TestNetworkPlugins/group/flannel/HairPin 0.1
385 TestNetworkPlugins/group/bridge/Localhost 0.11
386 TestNetworkPlugins/group/bridge/HairPin 0.12
387 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
388 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.75
TestDownloadOnly/v1.20.0/json-events (19.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-425605 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-425605 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.275625206s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.28s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1213 19:01:54.204574   22695 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1213 19:01:54.204676   22695 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
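
The check above amounts to confirming that the cached tarball logged at preload.go:146 is present on disk; the equivalent manual probe on the CI host would be (path taken verbatim from the log):

	test -f /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	  && echo 'preload present'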

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-425605
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-425605: exit status 85 (60.492118ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-425605 | jenkins | v1.34.0 | 13 Dec 24 19:01 UTC |          |
	|         | -p download-only-425605        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:01:34
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:01:34.967797   22708 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:01:34.967912   22708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:01:34.967920   22708 out.go:358] Setting ErrFile to fd 2...
	I1213 19:01:34.967925   22708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:01:34.968099   22708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	W1213 19:01:34.968215   22708 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20090-15903/.minikube/config/config.json: open /home/jenkins/minikube-integration/20090-15903/.minikube/config/config.json: no such file or directory
	I1213 19:01:34.968751   22708 out.go:352] Setting JSON to true
	I1213 19:01:34.969655   22708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2639,"bootTime":1734113856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:01:34.969714   22708 start.go:139] virtualization: kvm guest
	I1213 19:01:34.972152   22708 out.go:97] [download-only-425605] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1213 19:01:34.972260   22708 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 19:01:34.972312   22708 notify.go:220] Checking for updates...
	I1213 19:01:34.973736   22708 out.go:169] MINIKUBE_LOCATION=20090
	I1213 19:01:34.975186   22708 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:01:34.976493   22708 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:01:34.977692   22708 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	I1213 19:01:34.979039   22708 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 19:01:34.981526   22708 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 19:01:34.981710   22708 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:01:35.003030   22708 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:01:35.003095   22708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:01:35.376118   22708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-13 19:01:35.367614421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:01:35.376220   22708 docker.go:318] overlay module found
	I1213 19:01:35.377817   22708 out.go:97] Using the docker driver based on user configuration
	I1213 19:01:35.377842   22708 start.go:297] selected driver: docker
	I1213 19:01:35.377847   22708 start.go:901] validating driver "docker" against <nil>
	I1213 19:01:35.377926   22708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:01:35.422215   22708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-13 19:01:35.414294078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:01:35.422375   22708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:01:35.422924   22708 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1213 19:01:35.423121   22708 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 19:01:35.425113   22708 out.go:169] Using Docker driver with root privileges
	I1213 19:01:35.426486   22708 cni.go:84] Creating CNI manager for ""
	I1213 19:01:35.426559   22708 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:01:35.426573   22708 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:01:35.426640   22708 start.go:340] cluster config:
	{Name:download-only-425605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-425605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:01:35.428034   22708 out.go:97] Starting "download-only-425605" primary control-plane node in "download-only-425605" cluster
	I1213 19:01:35.428052   22708 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:01:35.429383   22708 out.go:97] Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:01:35.429406   22708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 19:01:35.429454   22708 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:01:35.445332   22708 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:01:35.445503   22708 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:01:35.445586   22708 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:01:35.529836   22708 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1213 19:01:35.529887   22708 cache.go:56] Caching tarball of preloaded images
	I1213 19:01:35.530086   22708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 19:01:35.532019   22708 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1213 19:01:35.532046   22708 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:01:35.635378   22708 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1213 19:01:47.283343   22708 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	
	
	* The control-plane node download-only-425605 host does not exist
	  To start a cluster, run: "minikube start -p download-only-425605"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
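
The "Last Start" log above shows the preload being fetched with an md5 checksum pinned in the download URL's query string. A sketch of reproducing that integrity check by hand, with the URL and checksum exactly as logged:

	URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4'
	curl -fsSLo preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 "$URL"
	echo 'f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4' | md5sum -c -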

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-425605
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (16.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-333411 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-333411 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.93674624s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (16.94s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1213 19:02:11.536723   22695 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1213 19:02:11.536766   22695 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.81s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-333411
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-333411: exit status 85 (809.718382ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-425605 | jenkins | v1.34.0 | 13 Dec 24 19:01 UTC |                     |
	|         | -p download-only-425605        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Dec 24 19:01 UTC | 13 Dec 24 19:01 UTC |
	| delete  | -p download-only-425605        | download-only-425605 | jenkins | v1.34.0 | 13 Dec 24 19:01 UTC | 13 Dec 24 19:01 UTC |
	| start   | -o=json --download-only        | download-only-333411 | jenkins | v1.34.0 | 13 Dec 24 19:01 UTC |                     |
	|         | -p download-only-333411        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:01:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:01:54.640812   23114 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:01:54.640916   23114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:01:54.640923   23114 out.go:358] Setting ErrFile to fd 2...
	I1213 19:01:54.640928   23114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:01:54.641121   23114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:01:54.641690   23114 out.go:352] Setting JSON to true
	I1213 19:01:54.642509   23114 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2659,"bootTime":1734113856,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:01:54.642570   23114 start.go:139] virtualization: kvm guest
	I1213 19:01:54.644853   23114 out.go:97] [download-only-333411] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:01:54.644995   23114 notify.go:220] Checking for updates...
	I1213 19:01:54.646479   23114 out.go:169] MINIKUBE_LOCATION=20090
	I1213 19:01:54.647921   23114 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:01:54.649217   23114 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:01:54.650663   23114 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	I1213 19:01:54.651933   23114 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 19:01:54.654287   23114 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 19:01:54.654525   23114 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:01:54.676058   23114 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:01:54.676127   23114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:01:54.723754   23114 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-13 19:01:54.715134774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:01:54.723854   23114 docker.go:318] overlay module found
	I1213 19:01:54.725582   23114 out.go:97] Using the docker driver based on user configuration
	I1213 19:01:54.725602   23114 start.go:297] selected driver: docker
	I1213 19:01:54.725607   23114 start.go:901] validating driver "docker" against <nil>
	I1213 19:01:54.725679   23114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:01:54.772126   23114 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-13 19:01:54.763686919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:01:54.772273   23114 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:01:54.772760   23114 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1213 19:01:54.772893   23114 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 19:01:54.774851   23114 out.go:169] Using Docker driver with root privileges
	I1213 19:01:54.776270   23114 cni.go:84] Creating CNI manager for ""
	I1213 19:01:54.776332   23114 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:01:54.776341   23114 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:01:54.776425   23114 start.go:340] cluster config:
	{Name:download-only-333411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-333411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:01:54.778026   23114 out.go:97] Starting "download-only-333411" primary control-plane node in "download-only-333411" cluster
	I1213 19:01:54.778040   23114 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:01:54.779461   23114 out.go:97] Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:01:54.779486   23114 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:01:54.779593   23114 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:01:54.797530   23114 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:01:54.797672   23114 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:01:54.797689   23114 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory, skipping pull
	I1213 19:01:54.797693   23114 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in cache, skipping pull
	I1213 19:01:54.797703   23114 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	I1213 19:01:55.267079   23114 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:01:55.267109   23114 cache.go:56] Caching tarball of preloaded images
	I1213 19:01:55.267239   23114 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:01:55.269190   23114 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1213 19:01:55.269203   23114 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:01:55.849552   23114 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:09.467858   23114 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:02:09.467970   23114 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20090-15903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-333411 host does not exist
	  To start a cluster, run: "minikube start -p download-only-333411"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.81s)
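
Note: the download step above appends an md5 checksum to the preload URL and verifies it after the fetch. A rough by-hand equivalent, with the URL and checksum copied from the log (the local filename is illustrative):

	# fetch the preload tarball and check it against the md5 from the download line
	curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
	echo "fc069bc1785feafa8477333f3a79092d  preload.tar.lz4" | md5sum -c -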

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-333411
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.07s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-509470 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-509470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-509470
--- PASS: TestDownloadOnlyKic (1.07s)

                                                
                                    
TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I1213 19:02:14.008824   22695 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-428326 --alsologtostderr --binary-mirror http://127.0.0.1:41935 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-428326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-428326
--- PASS: TestBinaryMirror (0.76s)
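
Note: this test points minikube at a local HTTP server instead of dl.k8s.io for the kubectl/kubelet/kubeadm binaries. A hedged reproduction using the flag and port from this run (any reachable mirror serving the same file layout would work; the port is whatever the test's local server happened to bind):

	out/minikube-linux-amd64 start --download-only -p binary-mirror-428326 \
	  --binary-mirror http://127.0.0.1:41935 --driver=docker --container-runtime=crio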

                                                
                                    
TestOffline (50.76s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-253456 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-253456 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (47.076518028s)
helpers_test.go:175: Cleaning up "offline-crio-253456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-253456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-253456: (3.687169321s)
--- PASS: TestOffline (50.76s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-237678
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-237678: exit status 85 (53.797371ms)

                                                
                                                
-- stdout --
	* Profile "addons-237678" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-237678"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-237678
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-237678: exit status 85 (57.770735ms)

                                                
                                                
-- stdout --
	* Profile "addons-237678" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-237678"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (169.77s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-237678 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-237678 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.773330356s)
--- PASS: TestAddons/Setup (169.77s)
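
Note: the single start command above enables every addon through repeated --addons flags. An equivalent piecemeal form (illustrative; only two of the addon names from the command above are shown, and --wait and memory flags are omitted for brevity) is to start first and enable addons afterwards:

	out/minikube-linux-amd64 start -p addons-237678 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p addons-237678 addons enable ingress
	out/minikube-linux-amd64 -p addons-237678 addons enable metrics-server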

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-237678 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-237678 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.45s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-237678 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-237678 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b8ceb780-fedf-4561-9b01-e1a78092ebc2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b8ceb780-fedf-4561-9b01-e1a78092ebc2] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00359226s
addons_test.go:633: (dbg) Run:  kubectl --context addons-237678 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-237678 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-237678 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.45s)
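
Note: the two printenv assertions above can be run by hand once the busybox pod is Ready (context, pod, and variable names are copied from the log); the gcp-auth webhook is expected to have injected both variables into the pod:

	kubectl --context addons-237678 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
	kubectl --context addons-237678 exec busybox -- printenv GOOGLE_CLOUD_PROJECT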

                                                
                                    
TestAddons/parallel/Registry (16.94s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.574413ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-sgzjd" [dc9a854b-15a2-47cc-b4c8-0f7c608e5335] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00251954s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nnht8" [c1db19b5-cb0e-4cec-b6fb-69ed544cf362] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003672421s
addons_test.go:331: (dbg) Run:  kubectl --context addons-237678 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-237678 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-237678 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.126647229s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 ip
2024/12/13 19:05:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.94s)
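
Note: the test probes the registry twice: in-cluster via the service DNS name, and from the host via the node IP seen in the DEBUG line. A rough host-side equivalent (IP and port are from this run; /v2/_catalog is the standard registry HTTP API, an assumption beyond what the log shows):

	kubectl --context addons-237678 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl http://192.168.49.2:5000/v2/_catalog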

                                                
                                    
TestAddons/parallel/InspektorGadget (11.63s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mg9rg" [aba83b3b-f9b5-4a2b-8a61-2bcd64f86012] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003992337s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 addons disable inspektor-gadget --alsologtostderr -v=1: (5.621069489s)
--- PASS: TestAddons/parallel/InspektorGadget (11.63s)
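
Note: the "waiting ... for pods matching" helper used throughout these tests boils down to a label-selector readiness wait. A hand-run equivalent for this case (label, namespace, and timeout from the lines above):

	kubectl --context addons-237678 -n gadget wait --for=condition=Ready pod -l k8s-app=gadget --timeout=8m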

                                                
                                    
TestAddons/parallel/CSI (53.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1213 19:05:40.657113   22695 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 19:05:40.661893   22695 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 19:05:40.661915   22695 kapi.go:107] duration metric: took 4.826466ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.833749ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-237678 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-237678 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [67d4a3b8-036c-4ade-a3d7-255da01a1c03] Pending
helpers_test.go:344: "task-pv-pod" [67d4a3b8-036c-4ade-a3d7-255da01a1c03] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [67d4a3b8-036c-4ade-a3d7-255da01a1c03] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.0978087s
addons_test.go:511: (dbg) Run:  kubectl --context addons-237678 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-237678 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-237678 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-237678 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-237678 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-237678 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-237678 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cc229adf-4b7f-4aa3-bac3-c252ef190de4] Pending
helpers_test.go:344: "task-pv-pod-restore" [cc229adf-4b7f-4aa3-bac3-c252ef190de4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cc229adf-4b7f-4aa3-bac3-c252ef190de4] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003556947s
addons_test.go:553: (dbg) Run:  kubectl --context addons-237678 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-237678 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-237678 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.588776336s)
--- PASS: TestAddons/parallel/CSI (53.06s)
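
Note: the repeated PVC and snapshot polling above uses plain jsonpath probes; the same checks can be run directly (object names from the log). The PVC should report "Bound" and the snapshot "true" once provisioning completes:

	kubectl --context addons-237678 get pvc hpvc -o jsonpath='{.status.phase}'
	kubectl --context addons-237678 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'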

                                                
                                    
TestAddons/parallel/Headlamp (17.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-237678 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-9glww" [072d913d-efbe-43d7-9336-1b6eba96cb23] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-9glww" [072d913d-efbe-43d7-9336-1b6eba96cb23] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00366471s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 addons disable headlamp --alsologtostderr -v=1: (5.662211259s)
--- PASS: TestAddons/parallel/Headlamp (17.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-8x4zb" [a9365383-5691-4862-ac2c-cd2396490229] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003369589s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/LocalPath (60.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-237678 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-237678 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [96df9040-2c25-4ef2-9d48-4b858ed26c00] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [96df9040-2c25-4ef2-9d48-4b858ed26c00] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [96df9040-2c25-4ef2-9d48-4b858ed26c00] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.00370708s
addons_test.go:906: (dbg) Run:  kubectl --context addons-237678 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 ssh "cat /opt/local-path-provisioner/pvc-44be87ee-926f-4202-9a14-cc59be04dc06_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-237678 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-237678 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.944757956s)
--- PASS: TestAddons/parallel/LocalPath (60.78s)
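
Note: the read-back assertion works because local-path-provisioner keeps volume data under /opt/local-path-provisioner on the node, so the file written by the test pod is readable over SSH. By hand (the path is copied verbatim from the log; the pvc-... directory name is unique to each run):

	out/minikube-linux-amd64 -p addons-237678 ssh \
	  "cat /opt/local-path-provisioner/pvc-44be87ee-926f-4202-9a14-cc59be04dc06_default_test-pvc/file1"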

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5ppp7" [c9d2d640-a841-4988-aaab-2a74cbfe5596] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003841694s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

                                                
                                    
TestAddons/parallel/Yakd (11.62s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-cnsqc" [853482a3-7bc4-42eb-a36b-bac7dd740c94] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002968468s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-237678 addons disable yakd --alsologtostderr -v=1: (5.611690247s)
--- PASS: TestAddons/parallel/Yakd (11.62s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-bl7z9" [53b1759f-8dcc-4454-ba3e-6feaf74540e7] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003689589s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.04s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-237678
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-237678: (11.792713323s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-237678
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-237678
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-237678
--- PASS: TestAddons/StoppedEnableDisable (12.04s)

                                                
                                    
TestCertOptions (27.62s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-740521 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-740521 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.207298036s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-740521 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-740521 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-740521 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-740521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-740521
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-740521: (2.671648461s)
--- PASS: TestCertOptions (27.62s)
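
Note: the openssl check above asserts that the apiserver certificate carries the extra IPs and names passed via the --apiserver-* flags. To see this by hand, filter the certificate for its SANs (the grep pattern is illustrative; file path and profile name are from the log):

	out/minikube-linux-amd64 -p cert-options-740521 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'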

                                                
                                    
TestCertExpiration (221.22s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-879980 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-879980 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.646900203s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-879980 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-879980 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.014310207s)
helpers_test.go:175: Cleaning up "cert-expiration-879980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-879980
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-879980: (2.56275595s)
--- PASS: TestCertExpiration (221.22s)
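
Note: the scenario in plain commands, assuming the test simply waits out the 3-minute expiry between the two starts (the roughly three-minute gap in the total duration suggests as much, but the wait is not shown in the log): the second start with --cert-expiration=8760h (one year) must succeed by re-issuing the lapsed certificates.

	out/minikube-linux-amd64 start -p cert-expiration-879980 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # let the certificates lapse
	out/minikube-linux-amd64 start -p cert-expiration-879980 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio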

                                                
                                    
TestForceSystemdFlag (30.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-282425 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-282425 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.259950321s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-282425 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-282425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-282425
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-282425: (4.044499711s)
--- PASS: TestForceSystemdFlag (30.64s)
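
Note: the ssh cat above inspects the generated CRI-O drop-in; with --force-systemd the setting that should change is CRI-O's standard cgroup_manager key (the grep is an illustrative spot-check, the file path is from the log):

	out/minikube-linux-amd64 -p force-systemd-flag-282425 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager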

                                                
                                    
TestForceSystemdEnv (38.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-277542 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-277542 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.5283506s)
helpers_test.go:175: Cleaning up "force-systemd-env-277542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-277542
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-277542: (2.956827018s)
--- PASS: TestForceSystemdEnv (38.49s)
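
Note: this is the environment-variable counterpart of the previous test. The variable itself is not shown in the log; minikube's documented MINIKUBE_FORCE_SYSTEMD is presumably what the harness sets, so a by-hand sketch under that assumption:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-277542 \
	  --memory=2048 --driver=docker --container-runtime=crio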

                                                
                                    
TestKVMDriverInstallOrUpdate (5s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1213 19:41:10.835632   22695 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 19:41:10.835856   22695 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1213 19:41:10.868525   22695 install.go:62] docker-machine-driver-kvm2: exit status 1
W1213 19:41:10.868912   22695 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1213 19:41:10.868980   22695 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4237708/001/docker-machine-driver-kvm2
I1213 19:41:11.127749   22695 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4237708/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080] Decompressors:map[bz2:0xc000693210 gz:0xc000693218 tar:0xc0006931c0 tar.bz2:0xc0006931d0 tar.gz:0xc0006931e0 tar.xz:0xc0006931f0 tar.zst:0xc000693200 tbz2:0xc0006931d0 tgz:0xc0006931e0 txz:0xc0006931f0 tzst:0xc000693200 xz:0xc000693220 zip:0xc000693230 zst:0xc000693228] Getters:map[file:0xc001b4d1f0 http:0xc000822960 https:0xc0008229b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1213 19:41:11.127804   22695 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4237708/001/docker-machine-driver-kvm2
I1213 19:41:13.829440   22695 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 19:41:13.829544   22695 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1213 19:41:13.874796   22695 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1213 19:41:13.874832   22695 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1213 19:41:13.874903   22695 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1213 19:41:13.874936   22695 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4237708/002/docker-machine-driver-kvm2
I1213 19:41:13.936032   22695 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4237708/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080] Decompressors:map[bz2:0xc000693210 gz:0xc000693218 tar:0xc0006931c0 tar.bz2:0xc0006931d0 tar.gz:0xc0006931e0 tar.xz:0xc0006931f0 tar.zst:0xc000693200 tbz2:0xc0006931d0 tgz:0xc0006931e0 txz:0xc0006931f0 tzst:0xc000693200 xz:0xc000693220 zip:0xc000693230 zst:0xc000693228] Getters:map[file:0xc0008036b0 http:0xc000074820 https:0xc0000748c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1213 19:41:13.936092   22695 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4237708/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.00s)
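
The download fallback captured in the driver.go lines above (try the arch-suffixed release asset first, then fall back to the common asset once its checksum file 404s) can be sketched in Go as follows. This is a minimal illustration rather than minikube's own code: fetch and downloadDriver are hypothetical names, and only the URLs are taken from the log.

// Sketch of the arch-specific-then-common download fallback, assuming the
// release URL layout shown in the log above.
package main

import (
	"fmt"
	"net/http"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

// fetch treats any non-200 response as an error, standing in for the
// checksum download performed by go-getter in the real test.
func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	return nil
}

func downloadDriver(arch string) error {
	// First try the arch-specific asset; its .sha256 file does not exist
	// for v1.3.0, so this 404s, as seen in the log.
	if err := fetch(base + "-" + arch + ".sha256"); err == nil {
		return fetch(base + "-" + arch)
	}
	// Fall back to the common, unsuffixed asset.
	return fetch(base)
}

func main() {
	fmt.Println(downloadDriver("amd64"))
}
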

TestErrorSpam/setup (23.2s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-212998 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212998 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-212998 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212998 --driver=docker  --container-runtime=crio: (23.198224343s)
--- PASS: TestErrorSpam/setup (23.20s)

TestErrorSpam/start (0.57s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

TestErrorSpam/status (0.87s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 status
--- PASS: TestErrorSpam/status (0.87s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 stop: (1.170830263s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212998 --log_dir /tmp/nospam-212998 stop
--- PASS: TestErrorSpam/stop (1.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20090-15903/.minikube/files/etc/test/nested/copy/22695/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.34s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660713 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-660713 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.341796131s)
--- PASS: TestFunctional/serial/StartWithProxy (45.34s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.83s)

=== RUN   TestFunctional/serial/SoftStart
I1213 19:12:59.395414   22695 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660713 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-660713 --alsologtostderr -v=8: (29.827825121s)
functional_test.go:663: soft start took 29.828649791s for "functional-660713" cluster.
I1213 19:13:29.223600   22695 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (29.83s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-660713 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-660713 cache add registry.k8s.io/pause:3.3: (1.009883662s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.91s)

TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-660713 /tmp/TestFunctionalserialCacheCmdcacheadd_local4011871362/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cache add minikube-local-cache-test:functional-660713
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-660713 cache add minikube-local-cache-test:functional-660713: (1.733277799s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cache delete minikube-local-cache-test:functional-660713
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-660713
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (270.753388ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
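
The cache_reload sequence above (remove the image inside the node, confirm crictl inspecti fails, run cache reload, confirm inspecti succeeds) can be reproduced with a short driver like this sketch. run is a hypothetical helper; the binary path and profile name are the ones from the log.

// Sketch of the cache reload round trip, driven with os/exec.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	p := "functional-660713"
	// Remove the image in the node, then confirm it is gone: inspecti
	// must fail before the reload and succeed after it.
	run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}
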

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 kubectl -- --context functional-660713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-660713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (31.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-660713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.971399241s)
functional_test.go:761: restart took 31.971526764s for "functional-660713" cluster.
I1213 19:14:08.684291   22695 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (31.97s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-660713 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
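
The health walk above boils down to listing the control-plane pods as JSON and checking each pod's phase plus its Ready condition. A minimal sketch, declaring only the PodList fields actually used here:

// Sketch of the control-plane health check: fetch pods as JSON and report
// phase and Ready status for each, mirroring the log lines above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-660713",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
			}
		}
	}
}
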

TestFunctional/serial/LogsCmd (1.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-660713 logs: (1.34446595s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 logs --file /tmp/TestFunctionalserialLogsFileCmd2970696555/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-660713 logs --file /tmp/TestFunctionalserialLogsFileCmd2970696555/001/logs.txt: (1.355253891s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.04s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-660713 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-660713
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-660713: exit status 115 (329.086051ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31205 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-660713 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 config get cpus: exit status 14 (74.999755ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 config get cpus: exit status 14 (54.605263ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
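
The round trip above relies on config get exiting with status 14 when the key is unset. In Go that surfaces as an *exec.ExitError, which a caller can check as in this sketch (binary path and profile name from the log):

// Sketch: detect the "key not found" exit code (14) from `config get`.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-660713", "config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("key unset, as expected (exit status 14)")
	}
}
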

TestFunctional/parallel/DashboardCmd (10.45s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-660713 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-660713 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 63417: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.45s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-660713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (140.853881ms)
-- stdout --
	* [functional-660713] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1213 19:14:42.179185   62621 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:14:42.179310   62621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:14:42.179328   62621 out.go:358] Setting ErrFile to fd 2...
	I1213 19:14:42.179333   62621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:14:42.179528   62621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:14:42.180081   62621 out.go:352] Setting JSON to false
	I1213 19:14:42.181100   62621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3426,"bootTime":1734113856,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:14:42.181182   62621 start.go:139] virtualization: kvm guest
	I1213 19:14:42.183312   62621 out.go:177] * [functional-660713] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:14:42.184786   62621 notify.go:220] Checking for updates...
	I1213 19:14:42.184825   62621 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:14:42.186422   62621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:14:42.187914   62621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:14:42.189801   62621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	I1213 19:14:42.191144   62621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:14:42.192545   62621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:14:42.194324   62621 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:14:42.194748   62621 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:14:42.217587   62621 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:14:42.217702   62621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:14:42.263379   62621 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-13 19:14:42.254790073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:14:42.263492   62621 docker.go:318] overlay module found
	I1213 19:14:42.265264   62621 out.go:177] * Using the docker driver based on existing profile
	I1213 19:14:42.266607   62621 start.go:297] selected driver: docker
	I1213 19:14:42.266622   62621 start.go:901] validating driver "docker" against &{Name:functional-660713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-660713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:14:42.266729   62621 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:14:42.269911   62621 out.go:201] 
	W1213 19:14:42.271308   62621 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 19:14:42.272676   62621 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660713 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
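
The dry run fails fast because the requested 250MB is below the usable minimum of 1800MB quoted in the log, reported as RSRC_INSUFFICIENT_REQ_MEMORY with exit status 23. A sketch of that gate, with validateMemory as a hypothetical stand-in for the real check:

// Sketch of the requested-memory floor that aborts the dry run above.
package main

import (
	"fmt"
	"os"
)

const minUsableMB = 1800 // floor quoted in the log message

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
		os.Exit(23)
	}
}
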

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-660713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (145.056091ms)
-- stdout --
	* [functional-660713] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 19:14:42.037196   62547 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:14:42.037298   62547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:14:42.037311   62547 out.go:358] Setting ErrFile to fd 2...
	I1213 19:14:42.037316   62547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:14:42.037576   62547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:14:42.038164   62547 out.go:352] Setting JSON to false
	I1213 19:14:42.039148   62547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3426,"bootTime":1734113856,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:14:42.039254   62547 start.go:139] virtualization: kvm guest
	I1213 19:14:42.041825   62547 out.go:177] * [functional-660713] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1213 19:14:42.043228   62547 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:14:42.043310   62547 notify.go:220] Checking for updates...
	I1213 19:14:42.045871   62547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:14:42.047322   62547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:14:42.048635   62547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	I1213 19:14:42.049846   62547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:14:42.051202   62547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:14:42.052827   62547 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:14:42.053319   62547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:14:42.075529   62547 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:14:42.075619   62547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:14:42.122012   62547 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-13 19:14:42.113248463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:14:42.122124   62547 docker.go:318] overlay module found
	I1213 19:14:42.123993   62547 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1213 19:14:42.125409   62547 start.go:297] selected driver: docker
	I1213 19:14:42.125420   62547 start.go:901] validating driver "docker" against &{Name:functional-660713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-660713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:14:42.125504   62547 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:14:42.127629   62547 out.go:201] 
	W1213 19:14:42.128953   62547 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 19:14:42.130254   62547 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.94s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

TestFunctional/parallel/ServiceCmdConnect (24.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-660713 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-660713 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-76wtv" [07f25647-200d-40fa-8c0c-dec4c8e22bd3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-76wtv" [07f25647-200d-40fa-8c0c-dec4c8e22bd3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 24.003811257s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30388
functional_test.go:1675: http://192.168.49.2:30388: success! body:
Hostname: hello-node-connect-67bdd5bbb4-76wtv
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30388
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.70s)
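
The connect check above amounts to taking the NodePort URL that minikube service --url printed and polling it until the echoserver responds. A minimal sketch, with the URL from the log and an illustrative retry policy:

// Sketch: poll the NodePort endpoint until it serves a 200 response.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:30388"
	for i := 0; i < 10; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s: success! body:\n%s\n", url, body)
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("service never became reachable")
}
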

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (44.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1071a937-99e3-4ac2-b68b-4692b3e55cd5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003929063s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-660713 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-660713 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-660713 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-660713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4f5a129d-b8b3-4d07-9e9d-3a0ea79d1b29] Pending
helpers_test.go:344: "sp-pod" [4f5a129d-b8b3-4d07-9e9d-3a0ea79d1b29] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4f5a129d-b8b3-4d07-9e9d-3a0ea79d1b29] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004141157s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-660713 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-660713 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-660713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [26c5c022-612c-4d23-b592-0ccdd364472e] Pending
helpers_test.go:344: "sp-pod" [26c5c022-612c-4d23-b592-0ccdd364472e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [26c5c022-612c-4d23-b592-0ccdd364472e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004388513s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-660713 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.24s)
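
The persistence check above writes a marker file through the first sp-pod, deletes and recreates the pod, then confirms the file survived on the claim. A sketch of the same kubectl sequence; kubectl here is a hypothetical wrapper, and the real test also waits for the new pod to be Running before the final ls:

// Sketch: verify a PVC-backed file survives pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-660713"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the recreated pod to be Running before this step)
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
		fmt.Println("marker file lost across pod recreation:", err)
	}
}
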

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.71s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh -n functional-660713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cp functional-660713:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2685902805/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh -n functional-660713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh -n functional-660713 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.71s)

TestFunctional/parallel/MySQL (23.93s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-660713 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-c667l" [35ef7fb8-8d05-48de-9d7b-73b80eb0418e] Pending
helpers_test.go:344: "mysql-6cdb49bbb-c667l" [35ef7fb8-8d05-48de-9d7b-73b80eb0418e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-c667l" [35ef7fb8-8d05-48de-9d7b-73b80eb0418e] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.025610636s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-660713 exec mysql-6cdb49bbb-c667l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-660713 exec mysql-6cdb49bbb-c667l -- mysql -ppassword -e "show databases;": exit status 1 (145.550233ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 19:14:38.456259   22695 retry.go:31] will retry after 1.260269864s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-660713 exec mysql-6cdb49bbb-c667l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-660713 exec mysql-6cdb49bbb-c667l -- mysql -ppassword -e "show databases;": exit status 1 (208.067466ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 19:14:39.925257   22695 retry.go:31] will retry after 1.017820791s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-660713 exec mysql-6cdb49bbb-c667l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.93s)
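
The two failed attempts above are the expected readiness dance: mysqld first rejects the client (ERROR 1045), is then briefly unreachable over its socket (ERROR 2002), and the third attempt succeeds. A minimal sketch of that retry loop, with hypothetical names (waitForMySQL is not minikube's API; the context and pod name are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL re-runs `mysql -e "show databases;"` inside the pod until it
// succeeds or the deadline passes, doubling the pause between attempts.
func waitForMySQL(context, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return nil
		}
		// ERROR 1045 / ERROR 2002 are typical while mysqld is still initializing.
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("mysql in pod %s not ready after %v", pod, timeout)
}

func main() {
	if err := waitForMySQL("functional-660713", "mysql-6cdb49bbb-c667l", 2*time.Minute); err != nil {
		panic(err)
	}
}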

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/22695/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo cat /etc/test/nested/copy/22695/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/22695.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo cat /etc/ssl/certs/22695.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/22695.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo cat /usr/share/ca-certificates/22695.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/226952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo cat /etc/ssl/certs/226952.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/226952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo cat /usr/share/ca-certificates/226952.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)
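
CertSync checks that the same certificate is readable at every synced path, including the OpenSSL subject-hash filenames (51391683.0, 3ec20f2e.0) that TLS libraries use for lookup. A minimal sketch of that comparison, assuming a local copy of the test cert named 22695.pem (a hypothetical filename) and reusing the ssh pattern from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// nodeFile reads a file inside the minikube node over ssh, mirroring the
// `ssh "sudo cat ..."` calls in the log.
func nodeFile(path string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-660713",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		panic(err)
	}
	return out
}

func main() {
	// Local copy of the synced test certificate; the filename is hypothetical.
	want, err := os.ReadFile("22695.pem")
	if err != nil {
		panic(err)
	}
	for _, p := range []string{
		"/etc/ssl/certs/22695.pem",
		"/usr/share/ca-certificates/22695.pem",
		"/etc/ssl/certs/51391683.0", // OpenSSL subject-hash name for the same cert
	} {
		if !bytes.Equal(nodeFile(p), want) {
			panic("certificate mismatch at " + p)
		}
		fmt.Println("ok:", p)
	}
}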

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-660713 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh "sudo systemctl is-active docker": exit status 1 (263.635849ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh "sudo systemctl is-active containerd": exit status 1 (279.904416ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
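
With cri-o as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit with status 3, which ssh propagates and minikube surfaces as exit status 1, so the non-zero exits above are the passing outcome. A minimal sketch of the same check, with illustrative names:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonActiveRuntime reports whether `systemctl is-active <unit>` inside the
// node says the unit is not running. A non-nil error is expected here:
// is-active exits 3 for an inactive unit, surfaced as minikube exit status 1.
func nonActiveRuntime(profile, unit string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Println(unit, "disabled:", nonActiveRuntime("functional-660713", unit))
	}
}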

                                                
                                    
TestFunctional/parallel/License (1.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.152777823s)
--- PASS: TestFunctional/parallel/License (1.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-660713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-660713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-660713 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-660713 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 58466: os: process already finished
helpers_test.go:508: unable to kill pid 58167: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660713 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-660713
localhost/kicbase/echo-server:functional-660713
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660713 image ls --format short --alsologtostderr:
I1213 19:14:54.654900   65153 out.go:345] Setting OutFile to fd 1 ...
I1213 19:14:54.655047   65153 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:54.655059   65153 out.go:358] Setting ErrFile to fd 2...
I1213 19:14:54.655066   65153 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:54.655359   65153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
I1213 19:14:54.655988   65153 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:54.656100   65153 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:54.656538   65153 cli_runner.go:164] Run: docker container inspect functional-660713 --format={{.State.Status}}
I1213 19:14:54.675203   65153 ssh_runner.go:195] Run: systemctl --version
I1213 19:14:54.675250   65153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660713
I1213 19:14:54.693021   65153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/functional-660713/id_rsa Username:docker}
I1213 19:14:54.792153   65153 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660713 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| localhost/minikube-local-cache-test     | functional-660713  | 84c175baca36f | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| localhost/kicbase/echo-server           | functional-660713  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660713 image ls --format table --alsologtostderr:
I1213 19:14:55.343176   65535 out.go:345] Setting OutFile to fd 1 ...
I1213 19:14:55.343305   65535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:55.343318   65535 out.go:358] Setting ErrFile to fd 2...
I1213 19:14:55.343326   65535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:55.343495   65535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
I1213 19:14:55.344076   65535 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:55.344167   65535 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:55.344521   65535 cli_runner.go:164] Run: docker container inspect functional-660713 --format={{.State.Status}}
I1213 19:14:55.362838   65535 ssh_runner.go:195] Run: systemctl --version
I1213 19:14:55.362899   65535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660713
I1213 19:14:55.379967   65535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/functional-660713/id_rsa Username:docker}
I1213 19:14:55.471844   65535 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660713 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-660713"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff
9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92
931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["reg
istry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"84c175baca36f55d4ebd08814edc369fb30bc0d578bac9d1413fd2f93d531158","repoDigests":["localhost/minikube-local-cache-test@sha256:41a09d9db6c658e24f909f8d4077528982add4bd4a6b9b1ee2f4e2f4fe148d72"],"repoTags":["localhost/minikube-local-cache-test:functional-660713"],"size":"3330"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb050
6e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:b
1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53958631"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0
d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660713 image ls --format json --alsologtostderr:
I1213 19:14:55.117197   65405 out.go:345] Setting OutFile to fd 1 ...
I1213 19:14:55.117477   65405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:55.117487   65405 out.go:358] Setting ErrFile to fd 2...
I1213 19:14:55.117491   65405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:55.117716   65405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
I1213 19:14:55.118359   65405 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:55.118468   65405 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:55.118850   65405 cli_runner.go:164] Run: docker container inspect functional-660713 --format={{.State.Status}}
I1213 19:14:55.136847   65405 ssh_runner.go:195] Run: systemctl --version
I1213 19:14:55.136922   65405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660713
I1213 19:14:55.155761   65405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/functional-660713/id_rsa Username:docker}
I1213 19:14:55.251643   65405 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
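
The JSON stdout above is a flat array of image records (id, repoDigests, repoTags, size). A minimal sketch of decoding it, using an illustrative struct rather than minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the JSON above; it is illustrative,
// not minikube's own type.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	var imgs []image
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%.13s  %d tag(s)  %s bytes\n", im.ID, len(im.RepoTags), im.Size)
	}
}

Piping the command's output through it, e.g. `out/minikube-linux-amd64 -p functional-660713 image ls --format json | go run decode.go` (decode.go being a hypothetical filename), would print one line per image.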

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660713 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53958631"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-660713
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 84c175baca36f55d4ebd08814edc369fb30bc0d578bac9d1413fd2f93d531158
repoDigests:
- localhost/minikube-local-cache-test@sha256:41a09d9db6c658e24f909f8d4077528982add4bd4a6b9b1ee2f4e2f4fe148d72
repoTags:
- localhost/minikube-local-cache-test:functional-660713
size: "3330"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660713 image ls --format yaml --alsologtostderr:
I1213 19:14:54.887473   65267 out.go:345] Setting OutFile to fd 1 ...
I1213 19:14:54.887759   65267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:54.887769   65267 out.go:358] Setting ErrFile to fd 2...
I1213 19:14:54.887773   65267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:54.888019   65267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
I1213 19:14:54.888627   65267 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:54.888727   65267 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:54.889098   65267 cli_runner.go:164] Run: docker container inspect functional-660713 --format={{.State.Status}}
I1213 19:14:54.907434   65267 ssh_runner.go:195] Run: systemctl --version
I1213 19:14:54.907487   65267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660713
I1213 19:14:54.925565   65267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/functional-660713/id_rsa Username:docker}
I1213 19:14:55.019884   65267 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh pgrep buildkitd: exit status 1 (256.308541ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image build -t localhost/my-image:functional-660713 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-660713 image build -t localhost/my-image:functional-660713 testdata/build --alsologtostderr: (2.713985416s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660713 image build -t localhost/my-image:functional-660713 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e790bc9283d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-660713
--> a9bdbfb7ce6
Successfully tagged localhost/my-image:functional-660713
a9bdbfb7ce66252d25d0380db6427fc79351802234094183931248780f6bf4e3
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660713 image build -t localhost/my-image:functional-660713 testdata/build --alsologtostderr:
I1213 19:14:55.299903   65512 out.go:345] Setting OutFile to fd 1 ...
I1213 19:14:55.300065   65512 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:55.300076   65512 out.go:358] Setting ErrFile to fd 2...
I1213 19:14:55.300082   65512 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:14:55.300362   65512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
I1213 19:14:55.301154   65512 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:55.301795   65512 config.go:182] Loaded profile config "functional-660713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:14:55.302172   65512 cli_runner.go:164] Run: docker container inspect functional-660713 --format={{.State.Status}}
I1213 19:14:55.321287   65512 ssh_runner.go:195] Run: systemctl --version
I1213 19:14:55.321342   65512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660713
I1213 19:14:55.341644   65512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/functional-660713/id_rsa Username:docker}
I1213 19:14:55.435507   65512 build_images.go:161] Building image from path: /tmp/build.3418185314.tar
I1213 19:14:55.435575   65512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 19:14:55.444228   65512 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3418185314.tar
I1213 19:14:55.447358   65512 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3418185314.tar: stat -c "%s %y" /var/lib/minikube/build/build.3418185314.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3418185314.tar': No such file or directory
I1213 19:14:55.447390   65512 ssh_runner.go:362] scp /tmp/build.3418185314.tar --> /var/lib/minikube/build/build.3418185314.tar (3072 bytes)
I1213 19:14:55.469470   65512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3418185314
I1213 19:14:55.477742   65512 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3418185314 -xf /var/lib/minikube/build/build.3418185314.tar
I1213 19:14:55.486445   65512 crio.go:315] Building image: /var/lib/minikube/build/build.3418185314
I1213 19:14:55.486504   65512 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-660713 /var/lib/minikube/build/build.3418185314 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 19:14:57.937694   65512 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-660713 /var/lib/minikube/build/build.3418185314 --cgroup-manager=cgroupfs: (2.451167339s)
I1213 19:14:57.937763   65512 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3418185314
I1213 19:14:57.946203   65512 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3418185314.tar
I1213 19:14:57.954405   65512 build_images.go:217] Built localhost/my-image:functional-660713 from /tmp/build.3418185314.tar
I1213 19:14:57.954441   65512 build_images.go:133] succeeded building to: functional-660713
I1213 19:14:57.954448   65512 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)
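
The stderr trace above shows the build path on a cri-o node: minikube tars the build context, copies the tar under /var/lib/minikube/build, unpacks it, runs `sudo podman build ... --cgroup-manager=cgroupfs`, and removes the staging files. A condensed sketch of that sequence driven through the same minikube binary, assuming a pre-made context tar named build.tar (a hypothetical filename):

package main

import (
	"fmt"
	"os/exec"
)

const profile = "functional-660713"

// mk invokes the minikube binary used throughout this report and echoes its output.
func mk(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", profile}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// 1. Ship a pre-made tar of the build context into the node.
	mk("cp", "build.tar", "/var/lib/minikube/build/ctx.tar")
	// 2. Unpack it and build with podman, as the stderr trace shows.
	mk("ssh", "sudo mkdir -p /var/lib/minikube/build/ctx")
	mk("ssh", "sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/ctx.tar")
	mk("ssh", "sudo podman build -t localhost/my-image:"+profile+" /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs")
	// 3. Clean up the staging area, as the test does.
	mk("ssh", "sudo rm -rf /var/lib/minikube/build/ctx /var/lib/minikube/build/ctx.tar")
}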

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.888468417s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-660713
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-660713 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-660713 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c16530bf-b1ba-47b0-a870-b103ffe0ac99] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c16530bf-b1ba-47b0-a870-b103ffe0ac99] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003476693s
I1213 19:14:28.224704   22695 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image load --daemon kicbase/echo-server:functional-660713 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image load --daemon kicbase/echo-server:functional-660713 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-660713
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image load --daemon kicbase/echo-server:functional-660713 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image save kicbase/echo-server:functional-660713 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image rm kicbase/echo-server:functional-660713 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-660713
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 image save --daemon kicbase/echo-server:functional-660713 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-660713
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-660713 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.230.182 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-660713 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (19.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-660713 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-660713 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-6bmfg" [8fe48e66-d99c-4c81-b7ea-352b08a33fac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-6bmfg" [8fe48e66-d99c-4c81-b7ea-352b08a33fac] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.003445661s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "312.969485ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "47.909262ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "302.381132ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.086581ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (12.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdany-port2121126167/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734117288684737875" to /tmp/TestFunctionalparallelMountCmdany-port2121126167/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734117288684737875" to /tmp/TestFunctionalparallelMountCmdany-port2121126167/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734117288684737875" to /tmp/TestFunctionalparallelMountCmdany-port2121126167/001/test-1734117288684737875
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.797318ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 19:14:48.973813   22695 retry.go:31] will retry after 721.839626ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 19:14 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 19:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 19:14 test-1734117288684737875
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh cat /mount-9p/test-1734117288684737875
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-660713 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cdce6d73-fed6-40a8-a15a-131d83631897] Pending
helpers_test.go:344: "busybox-mount" [cdce6d73-fed6-40a8-a15a-131d83631897] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cdce6d73-fed6-40a8-a15a-131d83631897] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cdce6d73-fed6-40a8-a15a-131d83631897] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.00358955s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-660713 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdany-port2121126167/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.05s)
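
The first findmnt probe above fails with exit status 1 and is retried because the 9p server needs a moment to start after `minikube mount` is launched; the retry then succeeds. A by-hand replay of the same check (a sketch; the profile name and mount point are the values from this run, the host path is hypothetical):

	out/minikube-linux-amd64 mount -p functional-660713 /tmp/mnt:/mount-9p &            # serve a host dir over 9p
	out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T /mount-9p | grep 9p"  # confirm the mount is 9p
	out/minikube-linux-amd64 -p functional-660713 ssh "sudo umount -f /mount-9p"        # tear down inside the guest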

TestFunctional/parallel/ServiceCmd/List (1.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-660713 service list: (1.727400142s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.73s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-660713 service list -o json: (1.684197417s)
functional_test.go:1494: Took "1.684292663s" to run "out/minikube-linux-amd64 -p functional-660713 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)
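
A sketch of post-processing the `service list -o json` output (the Namespace/Name field names are an assumption about the current schema, not something this test asserts):

	out/minikube-linux-amd64 -p functional-660713 service list -o json \
	  | jq -r '.[] | "\(.Namespace)/\(.Name)"'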

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30240
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 service hello-node --url
2024/12/13 19:14:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30240
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)
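
Once `service --url` resolves an endpoint it can be exercised directly; a minimal sketch using the service from this run:

	URL=$(out/minikube-linux-amd64 -p functional-660713 service hello-node --url)
	curl -s "$URL"   # hits the hello-node NodePort (30240 in this run)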

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
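
All three UpdateContextCmd subtests invoke the same command; `update-context` rewrites the profile's kubeconfig entry so the server address matches the running cluster. A sketch of verifying the result afterwards (the jsonpath filter is illustrative):

	out/minikube-linux-amd64 -p functional-660713 update-context
	kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-660713")].cluster.server}'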

TestFunctional/parallel/MountCmd/specific-port (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdspecific-port1172172793/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.54039ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 19:15:00.997449   22695 retry.go:31] will retry after 257.924651ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdspecific-port1172172793/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh "sudo umount -f /mount-9p": exit status 1 (258.502599ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-660713 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdspecific-port1172172793/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)
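
The trailing `umount -f` failing with "not mounted" (exit status 32) is the cleanup path running after the mount daemon had already been stopped, so the test still passes. A sketch of the fixed-port variant itself (46464 is the port from this run; the ss check is an assumption about where the 9p server listens):

	out/minikube-linux-amd64 mount -p functional-660713 /tmp/mnt:/mount-9p --port 46464 &
	ss -ltn | grep 46464   # the 9p server should be listening on the pinned host port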

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3256639312/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3256639312/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3256639312/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T" /mount1: exit status 1 (338.192981ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 19:15:02.605148   22695 retry.go:31] will retry after 706.017112ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660713 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-660713 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3256639312/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3256639312/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3256639312/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
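
`mount --kill=true` is the cleanup path being verified here: it terminates every mount daemon belonging to the profile, which is why the three stop attempts afterwards report "unable to find parent, assuming dead". A minimal sketch:

	out/minikube-linux-amd64 mount -p functional-660713 --kill=true                      # kill all 9p daemons for the profile
	pgrep -f "minikube.*mount.*functional-660713" || echo "no mount daemons remaining"   # hypothetical follow-up check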

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-660713
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-660713
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-660713
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (104.3s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-776090 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 19:15:07.754960   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:15:10.317381   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:15:15.439450   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:15:25.680889   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:15:46.162752   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:16:27.125041   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-776090 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m43.604010684s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (104.30s)
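
For context, --ha provisions a multi-control-plane cluster: this run comes up with three control-plane nodes (ha-776090, -m02, -m03) behind the shared endpoint https://192.168.49.254:8443 that appears in the status logs further down; a worker is added in a later step. A sketch of inspecting the result:

	kubectl --context ha-776090 get nodes -o wide
	out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr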

TestMultiControlPlane/serial/DeployApp (6.14s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-776090 -- rollout status deployment/busybox: (4.268631024s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-7wbbr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-bv7b4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-kqh7t -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-7wbbr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-bv7b4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-kqh7t -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-7wbbr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-bv7b4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-kqh7t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.14s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-7wbbr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-7wbbr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-bv7b4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-bv7b4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-kqh7t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-776090 -- exec busybox-7dff88458-kqh7t -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)
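
The pipeline in this test recovers the host IP from busybox nslookup output: line 5 of the output carries the answer for host.minikube.internal, and its third space-separated field is the address, which is then pinged. A standalone sketch (assuming the same busybox formatting inside the pod):

	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"   # 192.168.49.1 in this run, the docker network gateway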

TestMultiControlPlane/serial/AddWorkerNode (33.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-776090 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-776090 -v=7 --alsologtostderr: (33.135305806s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.97s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-776090 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (16.24s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp testdata/cp-test.txt ha-776090:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile146821713/001/cp-test_ha-776090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090:/home/docker/cp-test.txt ha-776090-m02:/home/docker/cp-test_ha-776090_ha-776090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test_ha-776090_ha-776090-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090:/home/docker/cp-test.txt ha-776090-m03:/home/docker/cp-test_ha-776090_ha-776090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test_ha-776090_ha-776090-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090:/home/docker/cp-test.txt ha-776090-m04:/home/docker/cp-test_ha-776090_ha-776090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test_ha-776090_ha-776090-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp testdata/cp-test.txt ha-776090-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile146821713/001/cp-test_ha-776090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m02:/home/docker/cp-test.txt ha-776090:/home/docker/cp-test_ha-776090-m02_ha-776090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test_ha-776090-m02_ha-776090.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m02:/home/docker/cp-test.txt ha-776090-m03:/home/docker/cp-test_ha-776090-m02_ha-776090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test_ha-776090-m02_ha-776090-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m02:/home/docker/cp-test.txt ha-776090-m04:/home/docker/cp-test_ha-776090-m02_ha-776090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test_ha-776090-m02_ha-776090-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp testdata/cp-test.txt ha-776090-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile146821713/001/cp-test_ha-776090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m03:/home/docker/cp-test.txt ha-776090:/home/docker/cp-test_ha-776090-m03_ha-776090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test_ha-776090-m03_ha-776090.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m03:/home/docker/cp-test.txt ha-776090-m02:/home/docker/cp-test_ha-776090-m03_ha-776090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test_ha-776090-m03_ha-776090-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m03:/home/docker/cp-test.txt ha-776090-m04:/home/docker/cp-test_ha-776090-m03_ha-776090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test_ha-776090-m03_ha-776090-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp testdata/cp-test.txt ha-776090-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile146821713/001/cp-test_ha-776090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m04:/home/docker/cp-test.txt ha-776090:/home/docker/cp-test_ha-776090-m04_ha-776090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090 "sudo cat /home/docker/cp-test_ha-776090-m04_ha-776090.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m04:/home/docker/cp-test.txt ha-776090-m02:/home/docker/cp-test_ha-776090-m04_ha-776090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test.txt"
E1213 19:17:49.046428   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m02 "sudo cat /home/docker/cp-test_ha-776090-m04_ha-776090-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m04:/home/docker/cp-test.txt ha-776090-m03:/home/docker/cp-test_ha-776090-m04_ha-776090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 ssh -n ha-776090-m03 "sudo cat /home/docker/cp-test_ha-776090-m04_ha-776090-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.24s)
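
The matrix above exercises every direction of `minikube cp`, each copy verified with `ssh ... sudo cat`. A condensed sketch of the three forms (destination paths hypothetical):

	out/minikube-linux-amd64 -p ha-776090 cp testdata/cp-test.txt ha-776090-m02:/home/docker/cp-test.txt                     # host -> node
	out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m02:/home/docker/cp-test.txt /tmp/cp-test.txt                         # node -> host
	out/minikube-linux-amd64 -p ha-776090 cp ha-776090-m02:/home/docker/cp-test.txt ha-776090-m03:/home/docker/cp-test.txt   # node -> node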

TestMultiControlPlane/serial/StopSecondaryNode (12.51s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-776090 node stop m02 -v=7 --alsologtostderr: (11.844229264s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr: exit status 7 (667.679238ms)

-- stdout --
	ha-776090
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-776090-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-776090-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-776090-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1213 19:18:02.155118   88926 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:18:02.155301   88926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:02.155314   88926 out.go:358] Setting ErrFile to fd 2...
	I1213 19:18:02.155320   88926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:02.155530   88926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:18:02.155731   88926 out.go:352] Setting JSON to false
	I1213 19:18:02.155762   88926 mustload.go:65] Loading cluster: ha-776090
	I1213 19:18:02.155868   88926 notify.go:220] Checking for updates...
	I1213 19:18:02.156224   88926 config.go:182] Loaded profile config "ha-776090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:18:02.156251   88926 status.go:174] checking status of ha-776090 ...
	I1213 19:18:02.156723   88926 cli_runner.go:164] Run: docker container inspect ha-776090 --format={{.State.Status}}
	I1213 19:18:02.176579   88926 status.go:371] ha-776090 host status = "Running" (err=<nil>)
	I1213 19:18:02.176614   88926 host.go:66] Checking if "ha-776090" exists ...
	I1213 19:18:02.176912   88926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-776090
	I1213 19:18:02.194937   88926 host.go:66] Checking if "ha-776090" exists ...
	I1213 19:18:02.195169   88926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:18:02.195221   88926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-776090
	I1213 19:18:02.212485   88926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/ha-776090/id_rsa Username:docker}
	I1213 19:18:02.312596   88926 ssh_runner.go:195] Run: systemctl --version
	I1213 19:18:02.316454   88926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:18:02.326640   88926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:02.377346   88926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-12-13 19:18:02.366487288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:18:02.377875   88926 kubeconfig.go:125] found "ha-776090" server: "https://192.168.49.254:8443"
	I1213 19:18:02.377903   88926 api_server.go:166] Checking apiserver status ...
	I1213 19:18:02.377933   88926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:18:02.388647   88926 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	I1213 19:18:02.397338   88926 api_server.go:182] apiserver freezer: "6:freezer:/docker/d699ed9af79e6101f3e824531b805fd238ede680a325f348214d3b1129be8228/crio/crio-99d26bcdeb61c1cd5124d501d97a1cf11f4037a704094768aaa2a790d242592d"
	I1213 19:18:02.397412   88926 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d699ed9af79e6101f3e824531b805fd238ede680a325f348214d3b1129be8228/crio/crio-99d26bcdeb61c1cd5124d501d97a1cf11f4037a704094768aaa2a790d242592d/freezer.state
	I1213 19:18:02.405170   88926 api_server.go:204] freezer state: "THAWED"
	I1213 19:18:02.405199   88926 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:18:02.408940   88926 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:18:02.408971   88926 status.go:463] ha-776090 apiserver status = Running (err=<nil>)
	I1213 19:18:02.408980   88926 status.go:176] ha-776090 status: &{Name:ha-776090 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:18:02.409002   88926 status.go:174] checking status of ha-776090-m02 ...
	I1213 19:18:02.409314   88926 cli_runner.go:164] Run: docker container inspect ha-776090-m02 --format={{.State.Status}}
	I1213 19:18:02.426691   88926 status.go:371] ha-776090-m02 host status = "Stopped" (err=<nil>)
	I1213 19:18:02.426710   88926 status.go:384] host is not running, skipping remaining checks
	I1213 19:18:02.426716   88926 status.go:176] ha-776090-m02 status: &{Name:ha-776090-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:18:02.426734   88926 status.go:174] checking status of ha-776090-m03 ...
	I1213 19:18:02.427020   88926 cli_runner.go:164] Run: docker container inspect ha-776090-m03 --format={{.State.Status}}
	I1213 19:18:02.444273   88926 status.go:371] ha-776090-m03 host status = "Running" (err=<nil>)
	I1213 19:18:02.444313   88926 host.go:66] Checking if "ha-776090-m03" exists ...
	I1213 19:18:02.444637   88926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-776090-m03
	I1213 19:18:02.462998   88926 host.go:66] Checking if "ha-776090-m03" exists ...
	I1213 19:18:02.463323   88926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:18:02.463371   88926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-776090-m03
	I1213 19:18:02.481299   88926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/ha-776090-m03/id_rsa Username:docker}
	I1213 19:18:02.576365   88926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:18:02.586690   88926 kubeconfig.go:125] found "ha-776090" server: "https://192.168.49.254:8443"
	I1213 19:18:02.586721   88926 api_server.go:166] Checking apiserver status ...
	I1213 19:18:02.586763   88926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:18:02.596170   88926 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	I1213 19:18:02.604903   88926 api_server.go:182] apiserver freezer: "6:freezer:/docker/0eb3929bbc84fdf0f947efced944312427a7c9f24d2ff3821bebc7f6422a3031/crio/crio-98ad3ed6aeea9e65db874f2a10d1876b9e02ae5862eed851c3d9f6f241527aab"
	I1213 19:18:02.604961   88926 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0eb3929bbc84fdf0f947efced944312427a7c9f24d2ff3821bebc7f6422a3031/crio/crio-98ad3ed6aeea9e65db874f2a10d1876b9e02ae5862eed851c3d9f6f241527aab/freezer.state
	I1213 19:18:02.613209   88926 api_server.go:204] freezer state: "THAWED"
	I1213 19:18:02.613242   88926 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:18:02.616850   88926 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:18:02.616874   88926 status.go:463] ha-776090-m03 apiserver status = Running (err=<nil>)
	I1213 19:18:02.616885   88926 status.go:176] ha-776090-m03 status: &{Name:ha-776090-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:18:02.616931   88926 status.go:174] checking status of ha-776090-m04 ...
	I1213 19:18:02.617171   88926 cli_runner.go:164] Run: docker container inspect ha-776090-m04 --format={{.State.Status}}
	I1213 19:18:02.635485   88926 status.go:371] ha-776090-m04 host status = "Running" (err=<nil>)
	I1213 19:18:02.635507   88926 host.go:66] Checking if "ha-776090-m04" exists ...
	I1213 19:18:02.635753   88926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-776090-m04
	I1213 19:18:02.653663   88926 host.go:66] Checking if "ha-776090-m04" exists ...
	I1213 19:18:02.653990   88926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:18:02.654037   88926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-776090-m04
	I1213 19:18:02.672547   88926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/ha-776090-m04/id_rsa Username:docker}
	I1213 19:18:02.764171   88926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:18:02.775420   88926 status.go:176] ha-776090-m04 status: &{Name:ha-776090-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.51s)
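
The non-zero exit above is expected: `minikube status` encodes component health bitwise in its exit code (1 host, 2 cluster, 4 kubernetes, summed, per minikube's own help text, if memory serves), so a fully stopped m02 yields exit status 7 while the remaining nodes still report Running. A sketch:

	out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
	echo "status exit code: $?"   # 0 when healthy; 7 after `node stop m02`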

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (43.93s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-776090 node start m02 -v=7 --alsologtostderr: (43.005889919s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (203.94s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-776090 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-776090 -v=7 --alsologtostderr
E1213 19:19:16.023924   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:16.030460   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:16.041897   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:16.063361   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:16.104842   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:16.186429   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:16.347972   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:16.669706   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:17.311813   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:18.593457   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:21.155469   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-776090 -v=7 --alsologtostderr: (36.67720182s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-776090 --wait=true -v=7 --alsologtostderr
E1213 19:19:26.277677   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:36.519520   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:57.001899   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:05.185955   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:32.887926   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:37.964189   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:21:59.886030   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-776090 --wait=true -v=7 --alsologtostderr: (2m47.164908358s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-776090
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (203.94s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-776090 node delete m03 -v=7 --alsologtostderr: (10.593529532s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (35.43s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-776090 stop -v=7 --alsologtostderr: (35.331267635s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr: exit status 7 (102.708514ms)

-- stdout --
	ha-776090
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-776090-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-776090-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 19:22:59.620069  107330 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:22:59.620170  107330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:22:59.620178  107330 out.go:358] Setting ErrFile to fd 2...
	I1213 19:22:59.620183  107330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:22:59.620366  107330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:22:59.620528  107330 out.go:352] Setting JSON to false
	I1213 19:22:59.620550  107330 mustload.go:65] Loading cluster: ha-776090
	I1213 19:22:59.620658  107330 notify.go:220] Checking for updates...
	I1213 19:22:59.620986  107330 config.go:182] Loaded profile config "ha-776090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:22:59.621007  107330 status.go:174] checking status of ha-776090 ...
	I1213 19:22:59.621472  107330 cli_runner.go:164] Run: docker container inspect ha-776090 --format={{.State.Status}}
	I1213 19:22:59.642392  107330 status.go:371] ha-776090 host status = "Stopped" (err=<nil>)
	I1213 19:22:59.642429  107330 status.go:384] host is not running, skipping remaining checks
	I1213 19:22:59.642436  107330 status.go:176] ha-776090 status: &{Name:ha-776090 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:22:59.642468  107330 status.go:174] checking status of ha-776090-m02 ...
	I1213 19:22:59.642726  107330 cli_runner.go:164] Run: docker container inspect ha-776090-m02 --format={{.State.Status}}
	I1213 19:22:59.659530  107330 status.go:371] ha-776090-m02 host status = "Stopped" (err=<nil>)
	I1213 19:22:59.659553  107330 status.go:384] host is not running, skipping remaining checks
	I1213 19:22:59.659559  107330 status.go:176] ha-776090-m02 status: &{Name:ha-776090-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:22:59.659575  107330 status.go:174] checking status of ha-776090-m04 ...
	I1213 19:22:59.659806  107330 cli_runner.go:164] Run: docker container inspect ha-776090-m04 --format={{.State.Status}}
	I1213 19:22:59.675925  107330 status.go:371] ha-776090-m04 host status = "Stopped" (err=<nil>)
	I1213 19:22:59.675949  107330 status.go:384] host is not running, skipping remaining checks
	I1213 19:22:59.675956  107330 status.go:176] ha-776090-m04 status: &{Name:ha-776090-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.43s)

TestMultiControlPlane/serial/RestartCluster (58.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-776090 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-776090 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (58.084873867s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.91s)
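
Note: the go-template passed to kubectl at ha_test.go:594 above walks .items and each node's .status.conditions, printing the status of the Ready condition per node. A minimal standalone sketch of the same template logic, run over a hand-written two-node JSON sample (the sample is illustrative, not captured from this run):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Two fake nodes, shaped like `kubectl get nodes -o json` output.
const nodesJSON = `{"items":[
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

// The exact template string the test passes to kubectl above.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per node; the test asserts every node reports True.
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}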

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (47.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-776090 --control-plane -v=7 --alsologtostderr
E1213 19:24:16.024580   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:24:43.727898   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-776090 --control-plane -v=7 --alsologtostderr: (46.524178907s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-776090 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
x
+
TestJSONOutput/start/Command (44.54s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-709311 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1213 19:25:05.185875   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-709311 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (44.536796158s)
--- PASS: TestJSONOutput/start/Command (44.54s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-709311 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-709311 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.74s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-709311 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-709311 --output=json --user=testUser: (5.736566705s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-331362 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-331362 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.540014ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ae968bce-90ee-4a19-9e8a-5acb77f7c390","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-331362] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02f785fc-4b81-4e19-94d4-96976b353f9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20090"}}
	{"specversion":"1.0","id":"8a6c975f-e388-4824-aa38-c0ecc5face38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8ed22d8e-d85c-4ea6-9f15-63863d736fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig"}}
	{"specversion":"1.0","id":"17f64497-a310-41f3-98e5-5e644b45baaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube"}}
	{"specversion":"1.0","id":"1d575049-2b50-48a3-974e-a1bd9985193d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"db407afc-53ea-42d7-bc0a-654ef51adf55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0553376e-5358-4073-8387-665beb4ef489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-331362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-331362
--- PASS: TestErrorJSONOutput (0.21s)
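
Note: the records in the stdout block above are CloudEvents-style JSON, one object per line, with minikube-specific types such as io.k8s.sigs.minikube.step and io.k8s.sigs.minikube.error. A minimal sketch that decodes one such line (the struct fields are read off the records above; the Go types are an assumption, not minikube's own definitions):

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the CloudEvents-style records minikube emits with --output=json.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error record from the stdout block above, verbatim.
	line := `{"specversion":"1.0","id":"0553376e-5358-4073-8387-665beb4ef489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	// The test's exit status 56 corresponds to this event's exitcode field.
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
}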

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (35.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-276673 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-276673 --network=: (33.7790235s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-276673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-276673
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-276673: (2.025082019s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.82s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (22.89s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-234919 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-234919 --network=bridge: (21.035416648s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-234919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-234919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-234919: (1.83308711s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.89s)

                                                
                                    
x
+
TestKicExistingNetwork (25.85s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 19:26:50.030429   22695 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 19:26:50.048589   22695 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 19:26:50.048664   22695 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 19:26:50.048682   22695 cli_runner.go:164] Run: docker network inspect existing-network
W1213 19:26:50.065501   22695 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 19:26:50.065535   22695 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 19:26:50.065553   22695 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 19:26:50.065727   22695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 19:26:50.084737   22695 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c239c3fa5e4d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3b:0d:70:f0} reservation:<nil>}
I1213 19:26:50.085223   22695 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00212b150}
I1213 19:26:50.085253   22695 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 19:26:50.085300   22695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 19:26:50.148834   22695 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-863332 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-863332 --network=existing-network: (24.177210699s)
helpers_test.go:175: Cleaning up "existing-network-863332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-863332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-863332: (1.5163109s)
I1213 19:27:15.860060   22695 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.85s)
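
Note: the network_create.go:124 line above shows how the test pre-creates the labeled network that minikube is then asked to reuse. A minimal sketch issuing the same docker network create call (subnet, labels, and name copied argument-for-argument from the logged command; assumes a local docker daemon and that the subnet is free):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command logged at cli_runner.go:164 above.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Fails if the network name or subnet is already taken.
		fmt.Println("create failed:", err)
	}
}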

                                                
                                    
x
+
TestKicCustomSubnet (23.8s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-263083 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-263083 --subnet=192.168.60.0/24: (21.711736391s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-263083 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-263083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-263083
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-263083: (2.072201569s)
--- PASS: TestKicCustomSubnet (23.80s)
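
Note: kic_custom_network_test.go:161 above verifies the requested subnet by reading the first IPAM config entry off the docker network. A minimal sketch of that verification step (the network name is taken from this run and only resolves while such a network exists):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-263083", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed (network may already be deleted):", err)
		return
	}
	// With --subnet=192.168.60.0/24 at start time, this prints that CIDR back.
	fmt.Println(strings.TrimSpace(string(out)))
}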

                                                
                                    
x
+
TestKicStaticIP (26.59s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-474499 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-474499 --static-ip=192.168.200.200: (24.426249355s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-474499 ip
helpers_test.go:175: Cleaning up "static-ip-474499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-474499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-474499: (2.04480951s)
--- PASS: TestKicStaticIP (26.59s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (45.74s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-566316 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-566316 --driver=docker  --container-runtime=crio: (20.273461592s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-576593 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-576593 --driver=docker  --container-runtime=crio: (20.183757944s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-566316
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-576593
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-576593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-576593
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-576593: (1.856723413s)
helpers_test.go:175: Cleaning up "first-566316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-566316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-566316: (2.257199996s)
--- PASS: TestMinikubeProfile (45.74s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.4s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-147300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-147300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.404321072s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.40s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-147300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-157323 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-157323 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.208699307s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.21s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-157323 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-147300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-147300 --alsologtostderr -v=5: (1.607520706s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-157323 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-157323
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-157323: (1.175775752s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-157323
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-157323: (6.948239076s)
E1213 19:29:16.023902   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/RestartStopped (7.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-157323 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (70.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-846698 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 19:30:05.186338   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-846698 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.972950909s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-846698 -- rollout status deployment/busybox: (3.954721328s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-prz5q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-rpkgs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-prz5q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-rpkgs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-prz5q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-rpkgs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.33s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-prz5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-prz5q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-rpkgs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-846698 -- exec busybox-7dff88458-rpkgs -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
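
Note: the sh pipeline above (nslookup | awk 'NR==5' | cut -d' ' -f3) scrapes the host gateway IP out of busybox nslookup output: field 3 of line 5, which is then pinged as 192.168.67.1. A minimal Go stand-in for that extraction (the sample output is an assumption about busybox's formatting, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

// thirdFieldOfFifthLine reproduces `awk 'NR==5' | cut -d' ' -f3`:
// take line 5, split on single spaces, return field 3.
func thirdFieldOfFifthLine(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Assumed busybox nslookup output shape; line 5 holds the answer record.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(thirdFieldOfFifthLine(sample)) // 192.168.67.1
}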

                                                
                                    
x
+
TestMultiNode/serial/AddNode (32.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-846698 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-846698 -v 3 --alsologtostderr: (31.792541461s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.41s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-846698 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp testdata/cp-test.txt multinode-846698:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1197598474/001/cp-test_multinode-846698.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698:/home/docker/cp-test.txt multinode-846698-m02:/home/docker/cp-test_multinode-846698_multinode-846698-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m02 "sudo cat /home/docker/cp-test_multinode-846698_multinode-846698-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698:/home/docker/cp-test.txt multinode-846698-m03:/home/docker/cp-test_multinode-846698_multinode-846698-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m03 "sudo cat /home/docker/cp-test_multinode-846698_multinode-846698-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp testdata/cp-test.txt multinode-846698-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1197598474/001/cp-test_multinode-846698-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698-m02:/home/docker/cp-test.txt multinode-846698:/home/docker/cp-test_multinode-846698-m02_multinode-846698.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698 "sudo cat /home/docker/cp-test_multinode-846698-m02_multinode-846698.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698-m02:/home/docker/cp-test.txt multinode-846698-m03:/home/docker/cp-test_multinode-846698-m02_multinode-846698-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m03 "sudo cat /home/docker/cp-test_multinode-846698-m02_multinode-846698-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp testdata/cp-test.txt multinode-846698-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1197598474/001/cp-test_multinode-846698-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698-m03:/home/docker/cp-test.txt multinode-846698:/home/docker/cp-test_multinode-846698-m03_multinode-846698.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698 "sudo cat /home/docker/cp-test_multinode-846698-m03_multinode-846698.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 cp multinode-846698-m03:/home/docker/cp-test.txt multinode-846698-m02:/home/docker/cp-test_multinode-846698-m03_multinode-846698-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 ssh -n multinode-846698-m02 "sudo cat /home/docker/cp-test_multinode-846698-m03_multinode-846698-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.20s)
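
Note: the block above repeats one pattern per node pair: push a file with minikube cp, then read it back over minikube ssh and compare. A minimal sketch of a single round trip (binary path, profile, and node name copied from the log; assumes the cluster from this test is still running):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
	if err != nil {
		panic(err)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Push the file into the node, then cat it back over ssh, as the test does.
	run("-p", "multinode-846698", "cp", "testdata/cp-test.txt",
		"multinode-846698:/home/docker/cp-test.txt")
	got := run("-p", "multinode-846698", "ssh", "-n", "multinode-846698",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Println("round-trip matches:", string(got) == string(want))
}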

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-846698 node stop m03: (1.181930379s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-846698 status: exit status 7 (475.998629ms)

                                                
                                                
-- stdout --
	multinode-846698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-846698-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-846698-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr: exit status 7 (464.224804ms)

                                                
                                                
-- stdout --
	multinode-846698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-846698-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-846698-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:31:18.679923  173109 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:31:18.680023  173109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:31:18.680038  173109 out.go:358] Setting ErrFile to fd 2...
	I1213 19:31:18.680044  173109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:31:18.680202  173109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:31:18.680377  173109 out.go:352] Setting JSON to false
	I1213 19:31:18.680399  173109 mustload.go:65] Loading cluster: multinode-846698
	I1213 19:31:18.680450  173109 notify.go:220] Checking for updates...
	I1213 19:31:18.680769  173109 config.go:182] Loaded profile config "multinode-846698": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:31:18.680788  173109 status.go:174] checking status of multinode-846698 ...
	I1213 19:31:18.681166  173109 cli_runner.go:164] Run: docker container inspect multinode-846698 --format={{.State.Status}}
	I1213 19:31:18.700681  173109 status.go:371] multinode-846698 host status = "Running" (err=<nil>)
	I1213 19:31:18.700716  173109 host.go:66] Checking if "multinode-846698" exists ...
	I1213 19:31:18.700941  173109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-846698
	I1213 19:31:18.717973  173109 host.go:66] Checking if "multinode-846698" exists ...
	I1213 19:31:18.718231  173109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:31:18.718283  173109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-846698
	I1213 19:31:18.735880  173109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/multinode-846698/id_rsa Username:docker}
	I1213 19:31:18.828341  173109 ssh_runner.go:195] Run: systemctl --version
	I1213 19:31:18.832263  173109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:31:18.843364  173109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:31:18.890109  173109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-12-13 19:31:18.881436671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:31:18.890764  173109 kubeconfig.go:125] found "multinode-846698" server: "https://192.168.67.2:8443"
	I1213 19:31:18.890790  173109 api_server.go:166] Checking apiserver status ...
	I1213 19:31:18.890827  173109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:31:18.900899  173109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1505/cgroup
	I1213 19:31:18.909672  173109 api_server.go:182] apiserver freezer: "6:freezer:/docker/8513136ba3a3b1974320fcefb3776cedfc0dddd10a4b98d87d9f6433d6ecbac6/crio/crio-183caf83b2c9049dab469fcb50498f29c1ccfb4418cd69e5022883034477e289"
	I1213 19:31:18.909749  173109 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8513136ba3a3b1974320fcefb3776cedfc0dddd10a4b98d87d9f6433d6ecbac6/crio/crio-183caf83b2c9049dab469fcb50498f29c1ccfb4418cd69e5022883034477e289/freezer.state
	I1213 19:31:18.917624  173109 api_server.go:204] freezer state: "THAWED"
	I1213 19:31:18.917654  173109 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 19:31:18.921291  173109 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 19:31:18.921318  173109 status.go:463] multinode-846698 apiserver status = Running (err=<nil>)
	I1213 19:31:18.921337  173109 status.go:176] multinode-846698 status: &{Name:multinode-846698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:31:18.921364  173109 status.go:174] checking status of multinode-846698-m02 ...
	I1213 19:31:18.921653  173109 cli_runner.go:164] Run: docker container inspect multinode-846698-m02 --format={{.State.Status}}
	I1213 19:31:18.939218  173109 status.go:371] multinode-846698-m02 host status = "Running" (err=<nil>)
	I1213 19:31:18.939243  173109 host.go:66] Checking if "multinode-846698-m02" exists ...
	I1213 19:31:18.939517  173109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-846698-m02
	I1213 19:31:18.957459  173109 host.go:66] Checking if "multinode-846698-m02" exists ...
	I1213 19:31:18.957765  173109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:31:18.957813  173109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-846698-m02
	I1213 19:31:18.975628  173109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/20090-15903/.minikube/machines/multinode-846698-m02/id_rsa Username:docker}
	I1213 19:31:19.068251  173109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:31:19.078659  173109 status.go:176] multinode-846698-m02 status: &{Name:multinode-846698-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:31:19.078707  173109 status.go:174] checking status of multinode-846698-m03 ...
	I1213 19:31:19.078988  173109 cli_runner.go:164] Run: docker container inspect multinode-846698-m03 --format={{.State.Status}}
	I1213 19:31:19.096681  173109 status.go:371] multinode-846698-m03 host status = "Stopped" (err=<nil>)
	I1213 19:31:19.096706  173109 status.go:384] host is not running, skipping remaining checks
	I1213 19:31:19.096716  173109 status.go:176] multinode-846698-m03 status: &{Name:multinode-846698-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)
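
Note: the stderr block above shows the probe order behind minikube status: docker container inspect for the host state, then an ssh check of kubelet, then pgrep/cgroup-freezer checks plus an HTTPS /healthz call for the apiserver; for a stopped container the remaining checks are skipped. A minimal sketch of the first probe only (container name copied from the log; assumes a local docker daemon):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus is the first probe from the log: ask the docker daemon for the
// container's state before attempting any ssh-level checks.
func hostStatus(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format={{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	s, err := hostStatus("multinode-846698-m03")
	if err != nil {
		fmt.Println("inspect failed (container may not exist):", err)
		return
	}
	// A non-running state is what the log above reports as host: Stopped.
	fmt.Println("docker state:", s)
}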

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-846698 node start m03 -v=7 --alsologtostderr: (8.239467726s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.91s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (79.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-846698
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-846698
E1213 19:31:28.249328   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-846698: (24.679674928s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-846698 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-846698 --wait=true -v=8 --alsologtostderr: (54.808590616s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-846698
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.59s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-846698 node delete m03: (4.411826357s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.98s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-846698 stop: (23.54227556s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-846698 status: exit status 7 (86.41335ms)

                                                
                                                
-- stdout --
	multinode-846698
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-846698-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr: exit status 7 (87.481484ms)

                                                
                                                
-- stdout --
	multinode-846698
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-846698-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:33:16.253929  182398 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:33:16.254048  182398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:33:16.254054  182398 out.go:358] Setting ErrFile to fd 2...
	I1213 19:33:16.254058  182398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:33:16.254244  182398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:33:16.254426  182398 out.go:352] Setting JSON to false
	I1213 19:33:16.254451  182398 mustload.go:65] Loading cluster: multinode-846698
	I1213 19:33:16.254505  182398 notify.go:220] Checking for updates...
	I1213 19:33:16.255010  182398 config.go:182] Loaded profile config "multinode-846698": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:33:16.255037  182398 status.go:174] checking status of multinode-846698 ...
	I1213 19:33:16.255712  182398 cli_runner.go:164] Run: docker container inspect multinode-846698 --format={{.State.Status}}
	I1213 19:33:16.273328  182398 status.go:371] multinode-846698 host status = "Stopped" (err=<nil>)
	I1213 19:33:16.273382  182398 status.go:384] host is not running, skipping remaining checks
	I1213 19:33:16.273396  182398 status.go:176] multinode-846698 status: &{Name:multinode-846698 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:33:16.273451  182398 status.go:174] checking status of multinode-846698-m02 ...
	I1213 19:33:16.273832  182398 cli_runner.go:164] Run: docker container inspect multinode-846698-m02 --format={{.State.Status}}
	I1213 19:33:16.293444  182398 status.go:371] multinode-846698-m02 host status = "Stopped" (err=<nil>)
	I1213 19:33:16.293471  182398 status.go:384] host is not running, skipping remaining checks
	I1213 19:33:16.293477  182398 status.go:176] multinode-846698-m02 status: &{Name:multinode-846698-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.72s)
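
Note: the "Non-zero exit ... exit status 7" lines above are expected here: `minikube status` exits 7 when a host is stopped, so the test treats that code as a valid outcome rather than a failure. A minimal Go sketch of interpreting the exit code that way:

    // statuscode.go: run `minikube status` and map exit code 7 to "stopped".
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("minikube", "-p", "multinode-846698", "status")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 7 {
    		fmt.Println("host reported as Stopped (exit 7, expected after `minikube stop`)")
    		return
    	}
    	if err != nil {
    		fmt.Println("unexpected status error:", err)
    	}
    }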

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-846698 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-846698 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.601340712s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-846698 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.19s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-846698
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-846698-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-846698-m02 --driver=docker  --container-runtime=crio: exit status 14 (67.852381ms)

                                                
                                                
-- stdout --
	* [multinode-846698-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-846698-m02' is duplicated with machine name 'multinode-846698-m02' in profile 'multinode-846698'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-846698-m03 --driver=docker  --container-runtime=crio
E1213 19:34:16.024712   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-846698-m03 --driver=docker  --container-runtime=crio: (20.412605683s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-846698
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-846698: exit status 80 (267.620956ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-846698 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-846698-m03 already exists in multinode-846698-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-846698-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-846698-m03: (1.834406778s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.63s)
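
Note: the two failures provoked above are deliberate: exit 14 (MK_USAGE) because the proposed profile name collides with an existing machine name, and exit 80 (GUEST_NODE_ADD) because the node already exists in another profile. A minimal sketch of a pre-flight check for the profile-level collision, assuming `minikube profile list --output=json` returns an object with a "valid" array of profiles carrying a "Name" field (field names inferred from typical output, not verified here); the in-profile machine-name collision is checked inside minikube itself:

    // namecheck.go: flag a proposed profile name that is already taken.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	proposed := "multinode-846698-m02"
    	out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var profiles struct {
    		Valid []struct {
    			Name string `json:"Name"`
    		} `json:"valid"`
    	}
    	if err := json.Unmarshal(out, &profiles); err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range profiles.Valid {
    		if p.Name == proposed {
    			fmt.Printf("name %q already in use; pick a unique profile name\n", proposed)
    			return
    		}
    	}
    	fmt.Printf("name %q is free\n", proposed)
    }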

                                                
                                    
TestPreload (118.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-744658 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1213 19:35:05.185894   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:35:39.090762   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-744658 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m19.974431412s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-744658 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-744658 image pull gcr.io/k8s-minikube/busybox: (3.059314858s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-744658
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-744658: (5.663638756s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-744658 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-744658 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (27.529390809s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-744658 image list
helpers_test.go:175: Cleaning up "test-preload-744658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-744658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-744658: (2.236362155s)
--- PASS: TestPreload (118.69s)
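
Note: the preload scenario above pulls an image into a cluster started with --preload=false, stops it, restarts it, and asserts the image survived. A minimal Go sketch of that sequence using the same CLI commands (not the test's actual code):

    // preloadflow.go: pull an image, restart the cluster, verify the image
    // still shows up in `minikube image list`.
    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func run(args ...string) string {
    	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("%v: %v\n%s", args, err, out)
    	}
    	return string(out)
    }

    func main() {
    	p := "test-preload-744658" // profile name from the log above
    	run("minikube", "-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
    	run("minikube", "stop", "-p", p)
    	run("minikube", "start", "-p", p, "--wait=true")
    	images := run("minikube", "-p", p, "image", "list")
    	if !strings.Contains(images, "busybox") {
    		log.Fatal("pulled image missing after restart")
    	}
    	log.Println("image survived the restart")
    }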

                                                
                                    
TestScheduledStopUnix (96.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-490270 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-490270 --memory=2048 --driver=docker  --container-runtime=crio: (20.200697861s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490270 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-490270 -n scheduled-stop-490270
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490270 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1213 19:36:55.306627   22695 retry.go:31] will retry after 148.846µs: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.307802   22695 retry.go:31] will retry after 79.165µs: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.308957   22695 retry.go:31] will retry after 337.334µs: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.310098   22695 retry.go:31] will retry after 201.779µs: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.311224   22695 retry.go:31] will retry after 624.287µs: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.312351   22695 retry.go:31] will retry after 919.579µs: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.313473   22695 retry.go:31] will retry after 856.081µs: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.314619   22695 retry.go:31] will retry after 2.519094ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.317825   22695 retry.go:31] will retry after 1.832561ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.320035   22695 retry.go:31] will retry after 2.558071ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.323259   22695 retry.go:31] will retry after 5.081017ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.328446   22695 retry.go:31] will retry after 7.060726ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.335610   22695 retry.go:31] will retry after 16.143666ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.352901   22695 retry.go:31] will retry after 22.747147ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
I1213 19:36:55.376191   22695 retry.go:31] will retry after 27.15914ms: open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/scheduled-stop-490270/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490270 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-490270 -n scheduled-stop-490270
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-490270
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490270 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-490270
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-490270: exit status 7 (67.469524ms)

                                                
                                                
-- stdout --
	scheduled-stop-490270
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-490270 -n scheduled-stop-490270
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-490270 -n scheduled-stop-490270: exit status 7 (65.466568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-490270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-490270
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-490270: (4.686575133s)
--- PASS: TestScheduledStopUnix (96.22s)
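
Note: the run of retry.go lines above shows the test re-reading the scheduled-stop pid file with a growing backoff until it appears. A minimal Go sketch of that retry loop; the path here is a typical default location, whereas the CI run above uses a custom MINIKUBE_HOME, so adjust for your environment:

    // pidretry.go: re-read a pid file with doubling backoff until it exists.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	path := os.ExpandEnv("$HOME/.minikube/profiles/scheduled-stop-490270/pid")
    	backoff := 100 * time.Microsecond
    	for i := 0; i < 15; i++ {
    		data, err := os.ReadFile(path)
    		if err == nil {
    			fmt.Printf("scheduled-stop pid: %s\n", data)
    			return
    		}
    		log.Printf("retry %d: will retry after %v: %v", i+1, backoff, err)
    		time.Sleep(backoff)
    		backoff *= 2 // roughly the doubling pattern seen in the log
    	}
    	log.Fatal("pid file never appeared")
    }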

                                                
                                    
TestInsufficientStorage (12.4s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-942258 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-942258 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.059587391s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c1125e9f-3405-4b67-8c98-d7d82fd2a031","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-942258] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"923444ba-d287-4052-bd25-510fb80ad749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20090"}}
	{"specversion":"1.0","id":"396ee233-6467-4b58-9761-19720d8a1112","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05dcd51c-7f0e-4e39-b61b-2c4325023ce2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig"}}
	{"specversion":"1.0","id":"a6e0c345-5d27-4d45-b684-1cebadaea4e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube"}}
	{"specversion":"1.0","id":"324720df-c1d4-4002-9bbe-4beb2537e0f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"04a6294c-28d0-401d-b477-476fcce52f69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0d5d92de-2588-4d8e-8c69-018b69c9fb73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f19f22aa-685d-4ba4-ae1c-9a3d7f5110af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8c1cc74f-5246-4693-85b8-556da55d6f3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f1657d95-9de6-4b92-8a42-56f897d7479d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"436c138b-0955-4419-9d3c-4f9fb298e66d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-942258\" primary control-plane node in \"insufficient-storage-942258\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f240ba15-b31b-4020-b653-04bd10cc1796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1734029593-20090 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4085112-c669-43ee-8a97-b643b3878eb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8487bc2a-1b54-4a2e-b243-ef1cf66f35d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-942258 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-942258 --output=json --layout=cluster: exit status 7 (267.109801ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-942258","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-942258","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 19:38:21.232637  204954 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-942258" does not appear in /home/jenkins/minikube-integration/20090-15903/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-942258 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-942258 --output=json --layout=cluster: exit status 7 (260.255343ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-942258","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-942258","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 19:38:21.493455  205053 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-942258" does not appear in /home/jenkins/minikube-integration/20090-15903/kubeconfig
	E1213 19:38:21.502774  205053 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/insufficient-storage-942258/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-942258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-942258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-942258: (1.807301165s)
--- PASS: TestInsufficientStorage (12.40s)
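
Note: with --output=json, minikube emits one CloudEvents envelope per line, as in the stdout block above. A minimal Go sketch that decodes that stream from stdin and surfaces error events such as RSRC_DOCKER_STORAGE (the "data" fields are all strings in this output, so a string map suffices):

    // events.go: decode line-delimited minikube CloudEvents, print errors.
    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"log"
    	"os"
    )

    type event struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	sc := bufio.NewScanner(os.Stdin) // pipe the minikube output in here
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		var ev event
    		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
    			continue // skip non-JSON lines
    		}
    		if ev.Type == "io.k8s.sigs.minikube.error" {
    			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
    		}
    	}
    	if err := sc.Err(); err != nil {
    		log.Fatal(err)
    	}
    }

Usage, matching the invocation above: `out/minikube-linux-amd64 start -p insufficient-storage-942258 --output=json ... | go run events.go`.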

                                                
                                    
TestRunningBinaryUpgrade (150.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2863816968 start -p running-upgrade-318447 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2863816968 start -p running-upgrade-318447 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m43.250289628s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-318447 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-318447 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.835876615s)
helpers_test.go:175: Cleaning up "running-upgrade-318447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-318447
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-318447: (4.923813112s)
--- PASS: TestRunningBinaryUpgrade (150.73s)

                                                
                                    
TestKubernetesUpgrade (346.34s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.632131743s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-500107
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-500107: (1.789667122s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-500107 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-500107 status --format={{.Host}}: exit status 7 (69.298465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.895781665s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-500107 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (69.161401ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-500107] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-500107
	    minikube start -p kubernetes-upgrade-500107 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5001072 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-500107 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-500107 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.575912029s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-500107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-500107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-500107: (2.245788324s)
--- PASS: TestKubernetesUpgrade (346.34s)
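
Note: the exit-106 block above is the guard this test exercises: minikube compares the requested Kubernetes version against the cluster's current one and refuses a downgrade (K8S_DOWNGRADE_UNSUPPORTED). A minimal, hand-rolled Go sketch of that comparison for illustration only, using the versions from this run:

    // downgradecheck.go: refuse a requested version older than the current one.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // cmp returns a negative, zero, or positive value comparing dotted
    // versions like "v1.31.2".
    func cmp(a, b string) int {
    	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
    	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
    	for i := 0; i < len(pa) && i < len(pb); i++ {
    		na, _ := strconv.Atoi(pa[i])
    		nb, _ := strconv.Atoi(pb[i])
    		if na != nb {
    			if na < nb {
    				return -1
    			}
    			return 1
    		}
    	}
    	return len(pa) - len(pb)
    }

    func main() {
    	current, requested := "v1.31.2", "v1.20.0" // versions from the log above
    	if cmp(requested, current) < 0 {
    		fmt.Printf("refusing to downgrade %s -> %s; recreate the cluster instead\n", current, requested)
    		return
    	}
    	fmt.Println("upgrade (or same version) is allowed")
    }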

                                                
                                    
TestMissingContainerUpgrade (116.99s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1896221638 start -p missing-upgrade-454811 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1896221638 start -p missing-upgrade-454811 --memory=2200 --driver=docker  --container-runtime=crio: (42.781221839s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-454811
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-454811: (17.91694196s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-454811
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-454811 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-454811 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.049175208s)
helpers_test.go:175: Cleaning up "missing-upgrade-454811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-454811
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-454811: (3.974786737s)
--- PASS: TestMissingContainerUpgrade (116.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.70s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-274167 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-274167 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (80.05702ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-274167] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
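
Note: the exit-14 block above is a flag mutual-exclusion check: --kubernetes-version makes no sense together with --no-kubernetes, so minikube rejects the combination as a usage error. A minimal Go sketch of the same kind of guard using the standard flag package (illustrative, not minikube's code):

    // flagguard.go: reject --kubernetes-version combined with --no-kubernetes.
    package main

    import (
    	"flag"
    	"fmt"
    	"os"
    )

    func main() {
    	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
    	flag.Parse()

    	if *noK8s && *k8sVersion != "" {
    		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
    		os.Exit(14) // matches the MK_USAGE exit code seen above
    	}
    	fmt.Println("flags ok")
    }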

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-274167 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-274167 --driver=docker  --container-runtime=crio: (29.190454976s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-274167 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (130.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1984991175 start -p stopped-upgrade-383308 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1984991175 start -p stopped-upgrade-383308 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m41.190146964s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1984991175 -p stopped-upgrade-383308 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1984991175 -p stopped-upgrade-383308 stop: (2.648609665s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-383308 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-383308 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.74434456s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.58s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-274167 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-274167 --no-kubernetes --driver=docker  --container-runtime=crio: (5.253934628s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-274167 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-274167 status -o json: exit status 2 (272.015654ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-274167","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-274167
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-274167: (1.833841543s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.36s)

                                                
                                    
TestNoKubernetes/serial/Start (13.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-274167 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-274167 --no-kubernetes --driver=docker  --container-runtime=crio: (13.148551538s)
--- PASS: TestNoKubernetes/serial/Start (13.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-274167 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-274167 "sudo systemctl is-active --quiet service kubelet": exit status 1 (333.980524ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
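
Note: the non-zero exit above is the success condition: `systemctl is-active` returns a non-zero status (3 for an inactive unit) when kubelet is not running, which ssh surfaces as exit 1. A minimal Go sketch of the same check over `minikube ssh`:

    // kubeletcheck.go: treat a non-zero systemctl exit as "kubelet not running".
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-274167",
    		"sudo systemctl is-active --quiet service kubelet")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("kubelet is not active, as expected:", err)
    		return
    	}
    	fmt.Println("unexpected: kubelet is active")
    }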

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                    
TestPause/serial/Start (48.49s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-522535 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-522535 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.494056659s)
--- PASS: TestPause/serial/Start (48.49s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-274167
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-274167: (1.244544109s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-274167 --driver=docker  --container-runtime=crio
E1213 19:39:16.024258   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-274167 --driver=docker  --container-runtime=crio: (7.305743732s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-274167 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-274167 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.8487ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (35.58s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-522535 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 19:40:05.185900   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-522535 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.563199443s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.58s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-383308
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
TestPause/serial/Pause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-522535 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-522535 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-522535 --output=json --layout=cluster: exit status 2 (369.15335ms)

                                                
                                                
-- stdout --
	{"Name":"pause-522535","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-522535","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
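
Note: the --layout=cluster JSON above uses HTTP-style status codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage). A minimal Go sketch decoding that payload; the struct covers only the fields shown in this log, and it decodes before inspecting the error because a paused cluster makes the command exit non-zero:

    // clusterstatus.go: decode `minikube status --output=json --layout=cluster`.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type clusterStatus struct {
    	Name       string `json:"Name"`
    	StatusCode int    `json:"StatusCode"`
    	StatusName string `json:"StatusName"`
    	Nodes      []struct {
    		Name       string `json:"Name"`
    		StatusName string `json:"StatusName"`
    	} `json:"Nodes"`
    }

    func main() {
    	out, err := exec.Command("minikube", "status", "-p", "pause-522535",
    		"--output=json", "--layout=cluster").Output()
    	var cs clusterStatus
    	if jsonErr := json.Unmarshal(out, &cs); jsonErr != nil {
    		log.Fatal(jsonErr, err)
    	}
    	fmt.Printf("cluster %s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName)
    	for _, n := range cs.Nodes {
    		fmt.Printf("  node %s: %s\n", n.Name, n.StatusName)
    	}
    }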

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-522535 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-522535 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (4.46s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-522535 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-522535 --alsologtostderr -v=5: (4.460552743s)
--- PASS: TestPause/serial/DeletePaused (4.46s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.79s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-522535
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-522535: exit status 1 (21.006712ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-522535: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)
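
Note: the exit-1 from `docker volume inspect` above is the proof of cleanup: after `minikube delete`, the daemon answers "no such volume". A minimal Go sketch of the same verification, matching on the error string seen in this log:

    // deleted.go: confirm the profile's Docker volume was removed.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "volume", "inspect", "pause-522535").CombinedOutput()
    	if err != nil && strings.Contains(string(out), "no such volume") {
    		fmt.Println("volume gone, cleanup verified")
    		return
    	}
    	fmt.Printf("volume still present (err=%v):\n%s", err, out)
    }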

                                                
                                    
TestNetworkPlugins/group/false (7.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-573269 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-573269 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (189.629888ms)

                                                
                                                
-- stdout --
	* [false-573269] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:40:59.091400  242927 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:40:59.091543  242927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:40:59.091553  242927 out.go:358] Setting ErrFile to fd 2...
	I1213 19:40:59.091557  242927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:40:59.091760  242927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-15903/.minikube/bin
	I1213 19:40:59.092373  242927 out.go:352] Setting JSON to false
	I1213 19:40:59.093535  242927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5003,"bootTime":1734113856,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:40:59.093635  242927 start.go:139] virtualization: kvm guest
	I1213 19:40:59.097238  242927 out.go:177] * [false-573269] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:40:59.099061  242927 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:40:59.099084  242927 notify.go:220] Checking for updates...
	I1213 19:40:59.102767  242927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:40:59.104671  242927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-15903/kubeconfig
	I1213 19:40:59.106355  242927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-15903/.minikube
	I1213 19:40:59.108005  242927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:40:59.109968  242927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:40:59.112194  242927 config.go:182] Loaded profile config "force-systemd-env-277542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:40:59.112362  242927 config.go:182] Loaded profile config "kubernetes-upgrade-500107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1213 19:40:59.112499  242927 config.go:182] Loaded profile config "missing-upgrade-454811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1213 19:40:59.112621  242927 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:40:59.144829  242927 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:40:59.144959  242927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:40:59.205638  242927 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:75 SystemTime:2024-12-13 19:40:59.192913384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 19:40:59.205822  242927 docker.go:318] overlay module found
	I1213 19:40:59.210101  242927 out.go:177] * Using the docker driver based on user configuration
	I1213 19:40:59.212025  242927 start.go:297] selected driver: docker
	I1213 19:40:59.212045  242927 start.go:901] validating driver "docker" against <nil>
	I1213 19:40:59.212058  242927 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:40:59.214985  242927 out.go:201] 
	W1213 19:40:59.216401  242927 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 19:40:59.217876  242927 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-573269 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-573269

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-573269

>>> host: /etc/nsswitch.conf:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /etc/hosts:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /etc/resolv.conf:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-573269

>>> host: crictl pods:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: crictl containers:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> k8s: describe netcat deployment:
error: context "false-573269" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-573269" does not exist

>>> k8s: netcat logs:
error: context "false-573269" does not exist

>>> k8s: describe coredns deployment:
error: context "false-573269" does not exist

>>> k8s: describe coredns pods:
error: context "false-573269" does not exist

>>> k8s: coredns logs:
error: context "false-573269" does not exist

>>> k8s: describe api server pod(s):
error: context "false-573269" does not exist

>>> k8s: api server logs:
error: context "false-573269" does not exist

>>> host: /etc/cni:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: ip a s:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: ip r s:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: iptables-save:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: iptables table nat:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> k8s: describe kube-proxy daemon set:
error: context "false-573269" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-573269" does not exist

>>> k8s: kube-proxy logs:
error: context "false-573269" does not exist

>>> host: kubelet daemon status:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: kubelet daemon config:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> k8s: kubelet logs:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 13 Dec 2024 19:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-454811
contexts:
- context:
    cluster: missing-upgrade-454811
    extensions:
    - extension:
        last-update: Fri, 13 Dec 2024 19:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-454811
  name: missing-upgrade-454811
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-454811
  user:
    client-certificate: /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/missing-upgrade-454811/client.crt
    client-key: /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/missing-upgrade-454811/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-573269

>>> host: docker daemon status:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: docker daemon config:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /etc/docker/daemon.json:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: docker system info:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: cri-docker daemon status:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: cri-docker daemon config:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: cri-dockerd version:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: containerd daemon status:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: containerd daemon config:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /etc/containerd/config.toml:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: containerd config dump:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: crio daemon status:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: crio daemon config:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: /etc/crio:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

>>> host: crio config:
* Profile "false-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573269"

----------------------- debugLogs end: false-573269 [took: 7.383665719s] --------------------------------
helpers_test.go:175: Cleaning up "false-573269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-573269
--- PASS: TestNetworkPlugins/group/false (7.94s)
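Aside: the kubectl config dump inside the debug logs above explains the repeated context errors. The kubeconfig's only entry is the leftover missing-upgrade-454811 cluster, current-context is empty, and no false-573269 context was ever written because that profile intentionally failed validation before a cluster was created. As a hedged illustration (standard kubectl usage, not part of the test):

	kubectl config get-contexts                         # false-573269 is absent, hence "context was not found"
	kubectl config use-context missing-upgrade-454811   # selects the one context that does exist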

TestStartStop/group/old-k8s-version/serial/FirstStart (119.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-026428 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-026428 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (1m59.482816034s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (119.48s)

TestStartStop/group/no-preload/serial/FirstStart (57.3s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-118174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-118174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (57.301043177s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.30s)

TestStartStop/group/no-preload/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-118174 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b6417a21-d121-4f56-9fa8-17cf2f4308c0] Pending
helpers_test.go:344: "busybox" [b6417a21-d121-4f56-9fa8-17cf2f4308c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b6417a21-d121-4f56-9fa8-17cf2f4308c0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003639047s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-118174 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-118174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-118174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (11.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-118174 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-118174 --alsologtostderr -v=3: (11.850699506s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118174 -n no-preload-118174
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118174 -n no-preload-118174: exit status 7 (81.381651ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-118174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
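Aside: the "exit status 7 (may be ok)" handling above leans on minikube status encoding component state in its exit code: per the command's help text, the host, cluster, and Kubernetes states each set one bit, so 7 (1+2+4) is the normal result for a fully stopped profile rather than a command failure. A minimal check along those lines (profile name from the log):

	out/minikube-linux-amd64 status -p no-preload-118174; echo "status bits: $?"   # 7 expected right after a stop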

TestStartStop/group/no-preload/serial/SecondStart (262.63s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-118174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-118174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.315179116s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118174 -n no-preload-118174
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.63s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-026428 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54ceb061-d66e-433e-83fe-88412f61ae06] Pending
helpers_test.go:344: "busybox" [54ceb061-d66e-433e-83fe-88412f61ae06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [54ceb061-d66e-433e-83fe-88412f61ae06] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003749831s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-026428 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-026428 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-026428 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/old-k8s-version/serial/Stop (11.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-026428 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-026428 --alsologtostderr -v=3: (11.838265738s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.84s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-026428 -n old-k8s-version-026428
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-026428 -n old-k8s-version-026428: exit status 7 (68.977674ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-026428 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (126.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-026428 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1213 19:44:16.023958   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-026428 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m6.411165076s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-026428 -n old-k8s-version-026428
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (126.77s)

TestStartStop/group/embed-certs/serial/FirstStart (46.73s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-670476 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 19:45:05.186004   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-670476 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (46.734084194s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.73s)

TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-670476 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5d39853-6bdc-4d26-8fad-85a336e12659] Pending
helpers_test.go:344: "busybox" [d5d39853-6bdc-4d26-8fad-85a336e12659] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d5d39853-6bdc-4d26-8fad-85a336e12659] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003705268s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-670476 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-670476 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-670476 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/embed-certs/serial/Stop (11.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-670476 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-670476 --alsologtostderr -v=3: (11.887307535s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.89s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-670476 -n embed-certs-670476
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-670476 -n embed-certs-670476: exit status 7 (66.877839ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-670476 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (262.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-670476 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-670476 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m21.844570816s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-670476 -n embed-certs-670476
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.19s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9tzl5" [e158e05d-be8c-4dfc-a9f2-a0e1beda65ee] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004282201s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-956372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-956372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (47.002187283s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9tzl5" [e158e05d-be8c-4dfc-a9f2-a0e1beda65ee] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003550465s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-026428 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-026428 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)
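Aside: VerifyKubernetesImages works by dumping the runtime's image store as JSON and comparing it against the image set expected for this Kubernetes version; the kindnetd and busybox entries above are logged as benign extras. A hedged way to inspect the same listing by hand (the jq filter assumes the JSON is an array of objects carrying a repoTags field, as recent minikube releases emit):

	out/minikube-linux-amd64 -p old-k8s-version-026428 image list --format=json | jq -r '.[].repoTags[]'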

TestStartStop/group/old-k8s-version/serial/Pause (3.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-026428 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-026428 -n old-k8s-version-026428
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-026428 -n old-k8s-version-026428: exit status 2 (301.50176ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-026428 -n old-k8s-version-026428
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-026428 -n old-k8s-version-026428: exit status 2 (336.825948ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-026428 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-026428 -n old-k8s-version-026428
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-026428 -n old-k8s-version-026428
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.04s)

TestStartStop/group/newest-cni/serial/FirstStart (28.98s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-073807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-073807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (28.983291026s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.98s)
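Aside: this start exercises --extra-config, which forwards component.key=value pairs to the named Kubernetes component at start time (here kubeadm's pod-network-cidr). A hedged illustration of the same mechanism with a different knob (the setting and value are arbitrary, not used by this suite):

	out/minikube-linux-amd64 start -p newest-cni-073807 --driver=docker --container-runtime=crio --extra-config=kubelet.max-pods=64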

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-073807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)
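Aside: the WARNING above records that this profile was started with --network-plugin=cni but no CNI DaemonSet has been deployed, so nodes stay NotReady and workload pods cannot schedule; the suite therefore skips the pod-scheduling steps for newest-cni. As a hedged sketch of the missing step (the manifest path is a placeholder, not something the test applies):

	kubectl --context newest-cni-073807 apply -f <your-cni-manifest.yaml>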

TestStartStop/group/newest-cni/serial/Stop (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-073807 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-073807 --alsologtostderr -v=3: (1.19144219s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073807 -n newest-cni-073807
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073807 -n newest-cni-073807: exit status 7 (68.992982ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-073807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (12.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-073807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-073807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (12.593879745s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073807 -n newest-cni-073807
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.91s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-956372 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c7c56f1f-52de-4217-bc0f-aba85110a3ad] Pending
helpers_test.go:344: "busybox" [c7c56f1f-52de-4217-bc0f-aba85110a3ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c7c56f1f-52de-4217-bc0f-aba85110a3ad] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004269167s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-956372 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-073807 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (3.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-073807 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073807 -n newest-cni-073807
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073807 -n newest-cni-073807: exit status 2 (314.300673ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073807 -n newest-cni-073807
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073807 -n newest-cni-073807: exit status 2 (306.111624ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-073807 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073807 -n newest-cni-073807
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073807 -n newest-cni-073807
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-956372 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-956372 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-956372 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-956372 --alsologtostderr -v=3: (11.974513888s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestNetworkPlugins/group/auto/Start (42.65s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.648587147s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fwxlq" [52d7782a-6984-4c20-8a6c-58a6b7ceb38e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003312347s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372: exit status 7 (81.671159ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-956372 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
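
Here the dashboard addon is enabled while the cluster is stopped: the Host status query prints Stopped and exits 7, which the harness accepts, and addons enable still succeeds, presumably because it only has to record the addon in the profile's stored config for the next start. A sketch:

	minikube status --format='{{.Host}}' -p default-k8s-diff-port-956372   # "Stopped"; exit status 7 in this run
	minikube addons enable dashboard -p default-k8s-diff-port-956372 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4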

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-956372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-956372 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.757826627s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fwxlq" [52d7782a-6984-4c20-8a6c-58a6b7ceb38e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004183606s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-118174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
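
UserAppExistsAfterStop and AddonExistsAfterStop re-verify the dashboard after the restart by waiting on the k8s-app=kubernetes-dashboard pod and then describing the metrics scraper. Roughly equivalent manual checks:

	kubectl --context no-preload-118174 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-118174 wait --for=condition=ready pod \
	  -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m
	kubectl --context no-preload-118174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard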

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-118174 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
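
The image audit lists everything in the container runtime's store and flags images outside minikube's own set (here the busybox test image and the kindnet CNI image). The same data can be inspected by hand:

	# machine-readable listing of every image in the runtime's store
	minikube -p no-preload-118174 image list --format=json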

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-118174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118174 -n no-preload-118174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118174 -n no-preload-118174: exit status 2 (352.163133ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-118174 -n no-preload-118174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-118174 -n no-preload-118174: exit status 2 (358.031466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-118174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118174 -n no-preload-118174
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-118174 -n no-preload-118174
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (44.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1213 19:48:08.251686   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.492809257s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.49s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-573269 "pgrep -a kubelet"
I1213 19:48:10.721501   22695 config.go:182] Loaded profile config "auto-573269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-573269 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hq5ts" [7dd7f235-c088-4595-b5b4-6f425836a6ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hq5ts" [7dd7f235-c088-4595-b5b4-6f425836a6ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004647693s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)
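
The NetCatPod step deploys the small dnsutils-based netcat deployment from the repo's testdata and waits for it to go Ready; the DNS, Localhost, and HairPin probes that follow all exec into this deployment. A rough manual equivalent (run from the minikube source tree, where testdata/netcat-deployment.yaml lives):

	kubectl --context auto-573269 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-573269 wait --for=condition=ready pod -l app=netcat --timeout=15m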

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-573269 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
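
The three probes above share one mechanism: exec into the netcat deployment and attempt a connection. nslookup exercises cluster DNS; the nc probes use -z (scan without sending data), -w 5 (five-second timeout), and -i 5 (five-second interval), first against localhost and then against the deployment's own service name, which only succeeds when hairpin NAT is working. Spelled out, assuming that reading of the hairpin check:

	kubectl --context auto-573269 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: the pod dials its own service ("netcat") and must be routed back to itself
	kubectl --context auto-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"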

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-s57nr" [190175f3-8e3a-448b-9ee9-05a1aa01e44b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004175609s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (58.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (58.803975657s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-573269 "pgrep -a kubelet"
I1213 19:48:43.975434   22695 config.go:182] Loaded profile config "kindnet-573269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-573269 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-clf6f" [bbf38269-a03c-45bf-9f62-bd0bd4c8db98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-clf6f" [bbf38269-a03c-45bf-9f62-bd0bd4c8db98] Running
E1213 19:48:52.164400   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:48:52.170804   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004320272s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-573269 exec deployment/netcat -- nslookup kubernetes.default
E1213 19:48:52.182790   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:48:52.204240   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:48:52.245622   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1213 19:48:52.327826   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1213 19:48:52.489936   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (48.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1213 19:49:12.660055   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:49:16.024026   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/functional-660713/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:49:33.141824   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.789522231s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.79s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vlp2n" [25ae3897-68da-4d44-8e14-6dfa4157ff73] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003951028s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-573269 "pgrep -a kubelet"
I1213 19:49:43.439326   22695 config.go:182] Loaded profile config "calico-573269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-573269 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gpv94" [2bf1f8bd-6c1d-4738-b3ad-0ffafb3f1dad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gpv94" [2bf1f8bd-6c1d-4738-b3ad-0ffafb3f1dad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.0040891s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-573269 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-573269 "pgrep -a kubelet"
I1213 19:50:00.900746   22695 config.go:182] Loaded profile config "custom-flannel-573269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-573269 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b8nwp" [1e1fc5cf-20e7-4cfa-bf6b-42fc3c822191] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b8nwp" [1e1fc5cf-20e7-4cfa-bf6b-42fc3c822191] Running
E1213 19:50:05.185517   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/addons-237678/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003817922s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-573269 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (57.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (57.051155918s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p8s25" [64d96341-b1cd-45b8-a917-1a6df33d1a3a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004722718s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (38.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.739479739s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p8s25" [64d96341-b1cd-45b8-a917-1a6df33d1a3a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003539928s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-670476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-670476 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-670476 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-670476 -n embed-certs-670476
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-670476 -n embed-certs-670476: exit status 2 (326.355899ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-670476 -n embed-certs-670476
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-670476 -n embed-certs-670476: exit status 2 (314.074823ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-670476 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-670476 -n embed-certs-670476
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-670476 -n embed-certs-670476
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.80s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (66.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-573269 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m6.670313223s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-573269 "pgrep -a kubelet"
I1213 19:51:10.590510   22695 config.go:182] Loaded profile config "bridge-573269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-573269 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pq8nd" [6afca17a-4b24-41fa-a521-1fd35b1a41bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pq8nd" [6afca17a-4b24-41fa-a521-1fd35b1a41bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004249399s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f7xtz" [9246d95e-7c17-4679-8436-641e3df93d54] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003987715s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-573269 "pgrep -a kubelet"
I1213 19:51:17.500718   22695 config.go:182] Loaded profile config "flannel-573269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-573269 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lhtnj" [26a1a344-f5d6-413e-8d18-5f3e03227102] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lhtnj" [26a1a344-f5d6-413e-8d18-5f3e03227102] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004360175s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (16.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-573269 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-573269 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124256346s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 19:51:35.895439   22695 retry.go:31] will retry after 856.763701ms: exit status 1
E1213 19:51:36.025167   22695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/old-k8s-version-026428/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Run:  kubectl --context bridge-573269 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.11s)
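
This is the only DNS probe in the run that needed a second attempt: the first nslookup timed out ("no servers could be reached"), the harness retried after ~857ms, and the retry passed, which usually just means CoreDNS was not yet answering on the freshly started bridge cluster. A crude shell version of the same retry, assuming transient DNS readiness is the only failure mode:

	until kubectl --context bridge-573269 exec deployment/netcat -- nslookup kubernetes.default; do
	  sleep 1   # retry until CoreDNS responds
	done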

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-573269 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-573269 "pgrep -a kubelet"
I1213 19:51:53.072555   22695 config.go:182] Loaded profile config "enable-default-cni-573269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-573269 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jbl5w" [eed6202e-5dbd-4370-af9e-036f4e17151c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jbl5w" [eed6202e-5dbd-4370-af9e-036f4e17151c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003871926s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hg7wt" [47a263c4-7bae-4d84-a8e1-d277ba57d6e7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003900891s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-573269 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-573269 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hg7wt" [47a263c4-7bae-4d84-a8e1-d277ba57d6e7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004738186s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-956372 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-956372 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-956372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372: exit status 2 (304.939138ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372: exit status 2 (320.127714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-956372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956372 -n default-k8s-diff-port-956372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)

                                                
                                    

Test skip (26/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-237678 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-412025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-412025
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.99s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-573269 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-573269

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-573269

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /etc/hosts:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /etc/resolv.conf:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-573269

>>> host: crictl pods:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: crictl containers:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> k8s: describe netcat deployment:
error: context "kubenet-573269" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-573269" does not exist

>>> k8s: netcat logs:
error: context "kubenet-573269" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-573269" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-573269" does not exist

>>> k8s: coredns logs:
error: context "kubenet-573269" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-573269" does not exist

>>> k8s: api server logs:
error: context "kubenet-573269" does not exist

>>> host: /etc/cni:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: ip a s:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: ip r s:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: iptables-save:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: iptables table nat:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-573269" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-573269" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-573269" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: kubelet daemon config:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> k8s: kubelet logs:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 13 Dec 2024 19:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-454811
contexts:
- context:
    cluster: missing-upgrade-454811
    extensions:
    - extension:
        last-update: Fri, 13 Dec 2024 19:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-454811
  name: missing-upgrade-454811
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-454811
  user:
    client-certificate: /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/missing-upgrade-454811/client.crt
    client-key: /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/missing-upgrade-454811/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-573269

>>> host: docker daemon status:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: docker daemon config:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: docker system info:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: cri-docker daemon status:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: cri-docker daemon config:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: cri-dockerd version:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: containerd daemon status:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: containerd daemon config:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: containerd config dump:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: crio daemon status:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: crio daemon config:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: /etc/crio:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

>>> host: crio config:
* Profile "kubenet-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573269"

----------------------- debugLogs end: kubenet-573269 [took: 4.793439998s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-573269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-573269
--- SKIP: TestNetworkPlugins/group/kubenet (4.99s)

TestNetworkPlugins/group/cilium (3.86s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-573269 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-573269

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-573269

>>> host: /etc/nsswitch.conf:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /etc/hosts:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /etc/resolv.conf:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-573269

>>> host: crictl pods:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: crictl containers:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> k8s: describe netcat deployment:
error: context "cilium-573269" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-573269" does not exist

>>> k8s: netcat logs:
error: context "cilium-573269" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-573269" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-573269" does not exist

>>> k8s: coredns logs:
error: context "cilium-573269" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-573269" does not exist

>>> k8s: api server logs:
error: context "cilium-573269" does not exist

>>> host: /etc/cni:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: ip a s:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: ip r s:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: iptables-save:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: iptables table nat:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-573269

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-573269

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-573269" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-573269" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-573269

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-573269

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-573269" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-573269" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-573269" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-573269" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-573269" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: kubelet daemon config:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> k8s: kubelet logs:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20090-15903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 13 Dec 2024 19:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-454811
contexts:
- context:
    cluster: missing-upgrade-454811
    extensions:
    - extension:
        last-update: Fri, 13 Dec 2024 19:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-454811
  name: missing-upgrade-454811
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-454811
  user:
    client-certificate: /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/missing-upgrade-454811/client.crt
    client-key: /home/jenkins/minikube-integration/20090-15903/.minikube/profiles/missing-upgrade-454811/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-573269

>>> host: docker daemon status:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: docker daemon config:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: docker system info:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: cri-docker daemon status:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: cri-docker daemon config:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: cri-dockerd version:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: containerd daemon status:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: containerd daemon config:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: containerd config dump:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: crio daemon status:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: crio daemon config:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: /etc/crio:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

>>> host: crio config:
* Profile "cilium-573269" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573269"

----------------------- debugLogs end: cilium-573269 [took: 3.676280032s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-573269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-573269
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)