Test Report: Docker_Linux_crio 20052

8d1e3f592e1f661c71a144f8266060bd168d3f35:2024-12-05:37356

Tests failed (3/330)

| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                 | 151.51       |
| 38    | TestAddons/parallel/MetricsServer           | 351.52       |
| 176   | TestMultiControlPlane/serial/RestartCluster | 125.16       |
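For reference, a minimal sketch of how one of these failures might be re-run locally against the same driver and runtime. Assumptions not shown in this report: the minikube source tree is checked out, out/minikube-linux-amd64 is a Make target, and the integration-test harness under test/integration accepts a -minikube-start-args flag; the exact invocation used by this CI job may differ.

# build the minikube binary the tests drive (same path as in the logs below)
make out/minikube-linux-amd64

# re-run only the failing subtest with the docker driver and cri-o runtime
go test -v -timeout 30m ./test/integration \
  -run 'TestAddons/parallel/Ingress' \
  -args -minikube-start-args='--driver=docker --container-runtime=crio'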
TestAddons/parallel/Ingress (151.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-792804 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-792804 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-792804 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5f2a4e07-b429-4950-bd79-c2c78255bc7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5f2a4e07-b429-4950-bd79-c2c78255bc7c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003613976s
I1205 19:06:25.546434 1006315 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-792804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.04154914s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-792804 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
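Exit status 28 is curl's timeout exit code surfacing through minikube ssh: the command ran on the node, but the request to 127.0.0.1 never completed. A hedged sketch of how the same check could be probed by hand against this profile; the deploy/ingress-nginx-controller name is the upstream default for the ingress addon and is assumed here, everything else is taken from the commands already shown:

# confirm the controller and the test objects are present
kubectl --context addons-792804 -n ingress-nginx get pods
kubectl --context addons-792804 get pods,svc,ingress

# check the controller logs for the failing request (deployment name assumed)
kubectl --context addons-792804 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

# retry the in-node request with an explicit timeout and verbose output
out/minikube-linux-amd64 -p addons-792804 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"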
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-792804
helpers_test.go:235: (dbg) docker inspect addons-792804:
-- stdout --
	[
	    {
	        "Id": "151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f",
	        "Created": "2024-12-05T19:03:41.28736008Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1008381,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-05T19:03:41.416115782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/hostname",
	        "HostsPath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/hosts",
	        "LogPath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f-json.log",
	        "Name": "/addons-792804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-792804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-792804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd-init/diff:/var/lib/docker/overlay2/eeb994da5272b5c43f59ac5fc7f49f2b48f722f8f3da0a9c9746c4ff0b32901d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-792804",
	                "Source": "/var/lib/docker/volumes/addons-792804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-792804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-792804",
	                "name.minikube.sigs.k8s.io": "addons-792804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4822d7e0338dcd0d2529d3e4389ec00e9dd766e359b49100abae0e97270fd059",
	            "SandboxKey": "/var/run/docker/netns/4822d7e0338d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-792804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "230e1683a1408d00d56e983c87473d02202929efd49ac915a0c10e139c694e7e",
	                    "EndpointID": "9f5a5ad0fa823cd2b2ca76841ca932e97c39ebc3eb0f80db6e085da1e5bb76bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-792804",
	                        "151493fe4197"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
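The handful of fields that matter for the post-mortem can be read straight out of this inspect blob with docker's Go templates instead of scanning the full JSON; the sketch below mirrors templates minikube itself runs later in these logs and only touches fields visible in the dump above:

# forwarded host port for the node's 22/tcp (SSH) endpoint
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-792804

# the node's IPv4 address on the addons-792804 network
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-792804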
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-792804 -n addons-792804
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 logs -n 25: (1.18097208s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-589784                                                                     | download-only-589784   | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:03 UTC |
	| start   | --download-only -p                                                                          | download-docker-901904 | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | download-docker-901904                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-901904                                                                   | download-docker-901904 | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-857636   | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | binary-mirror-857636                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35259                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-857636                                                                     | binary-mirror-857636   | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | addons-792804                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | addons-792804                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-792804 --wait=true                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:05 UTC | 05 Dec 24 19:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:05 UTC | 05 Dec 24 19:06 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | -p addons-792804                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-792804 ip                                                                            | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-792804 ssh curl -s                                                                   | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-792804 ssh cat                                                                       | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | /opt/local-path-provisioner/pvc-fdbafd17-1365-40d4-95e6-83d3408a157a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-792804 ip                                                                            | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:08 UTC | 05 Dec 24 19:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:03:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:03:19.361609 1007620 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:03:19.361876 1007620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:19.361887 1007620 out.go:358] Setting ErrFile to fd 2...
	I1205 19:03:19.361891 1007620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:19.362126 1007620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:03:19.362745 1007620 out.go:352] Setting JSON to false
	I1205 19:03:19.363654 1007620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":78350,"bootTime":1733347049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:03:19.363763 1007620 start.go:139] virtualization: kvm guest
	I1205 19:03:19.366003 1007620 out.go:177] * [addons-792804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:03:19.367413 1007620 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:03:19.367471 1007620 notify.go:220] Checking for updates...
	I1205 19:03:19.369671 1007620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:03:19.370869 1007620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:03:19.371998 1007620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:03:19.373183 1007620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:03:19.374350 1007620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:03:19.375556 1007620 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:03:19.397781 1007620 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:03:19.397911 1007620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:19.444934 1007620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:19.436368323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:19.445049 1007620 docker.go:318] overlay module found
	I1205 19:03:19.446834 1007620 out.go:177] * Using the docker driver based on user configuration
	I1205 19:03:19.447976 1007620 start.go:297] selected driver: docker
	I1205 19:03:19.447989 1007620 start.go:901] validating driver "docker" against <nil>
	I1205 19:03:19.448001 1007620 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:03:19.448796 1007620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:19.495477 1007620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:19.486453627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:19.495718 1007620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:03:19.496007 1007620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:03:19.497657 1007620 out.go:177] * Using Docker driver with root privileges
	I1205 19:03:19.498817 1007620 cni.go:84] Creating CNI manager for ""
	I1205 19:03:19.498879 1007620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:19.498889 1007620 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:03:19.498945 1007620 start.go:340] cluster config:
	{Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:03:19.500427 1007620 out.go:177] * Starting "addons-792804" primary control-plane node in "addons-792804" cluster
	I1205 19:03:19.501535 1007620 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:03:19.502611 1007620 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 19:03:19.503617 1007620 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:19.503646 1007620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:03:19.503677 1007620 cache.go:56] Caching tarball of preloaded images
	I1205 19:03:19.503722 1007620 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 19:03:19.503759 1007620 preload.go:172] Found /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:03:19.503769 1007620 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:03:19.504083 1007620 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/config.json ...
	I1205 19:03:19.504114 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/config.json: {Name:mk9d633c942a45e5afc8a11b162149a265a14aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:19.519545 1007620 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 19:03:19.519666 1007620 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1205 19:03:19.519681 1007620 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1205 19:03:19.519685 1007620 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1205 19:03:19.519692 1007620 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1205 19:03:19.519699 1007620 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1205 19:03:31.339875 1007620 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1205 19:03:31.339924 1007620 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:03:31.339974 1007620 start.go:360] acquireMachinesLock for addons-792804: {Name:mk10d4262ee22036cc298cfe9235901baa45df31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:03:31.340639 1007620 start.go:364] duration metric: took 641.892µs to acquireMachinesLock for "addons-792804"
	I1205 19:03:31.340666 1007620 start.go:93] Provisioning new machine with config: &{Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:03:31.340739 1007620 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:03:31.342425 1007620 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 19:03:31.342651 1007620 start.go:159] libmachine.API.Create for "addons-792804" (driver="docker")
	I1205 19:03:31.342697 1007620 client.go:168] LocalClient.Create starting
	I1205 19:03:31.342783 1007620 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem
	I1205 19:03:31.546337 1007620 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem
	I1205 19:03:31.683901 1007620 cli_runner.go:164] Run: docker network inspect addons-792804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:03:31.700177 1007620 cli_runner.go:211] docker network inspect addons-792804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:03:31.700271 1007620 network_create.go:284] running [docker network inspect addons-792804] to gather additional debugging logs...
	I1205 19:03:31.700297 1007620 cli_runner.go:164] Run: docker network inspect addons-792804
	W1205 19:03:31.715260 1007620 cli_runner.go:211] docker network inspect addons-792804 returned with exit code 1
	I1205 19:03:31.715298 1007620 network_create.go:287] error running [docker network inspect addons-792804]: docker network inspect addons-792804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-792804 not found
	I1205 19:03:31.715315 1007620 network_create.go:289] output of [docker network inspect addons-792804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-792804 not found
	
	** /stderr **
	I1205 19:03:31.715436 1007620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:03:31.731748 1007620 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cd0bc0}
	I1205 19:03:31.731806 1007620 network_create.go:124] attempt to create docker network addons-792804 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:03:31.731854 1007620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-792804 addons-792804
	I1205 19:03:31.790499 1007620 network_create.go:108] docker network addons-792804 192.168.49.0/24 created
	I1205 19:03:31.790527 1007620 kic.go:121] calculated static IP "192.168.49.2" for the "addons-792804" container
	I1205 19:03:31.790611 1007620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:03:31.806982 1007620 cli_runner.go:164] Run: docker volume create addons-792804 --label name.minikube.sigs.k8s.io=addons-792804 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:03:31.822909 1007620 oci.go:103] Successfully created a docker volume addons-792804
	I1205 19:03:31.822978 1007620 cli_runner.go:164] Run: docker run --rm --name addons-792804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792804 --entrypoint /usr/bin/test -v addons-792804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1205 19:03:36.809152 1007620 cli_runner.go:217] Completed: docker run --rm --name addons-792804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792804 --entrypoint /usr/bin/test -v addons-792804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (4.986125676s)
	I1205 19:03:36.809185 1007620 oci.go:107] Successfully prepared a docker volume addons-792804
	I1205 19:03:36.809225 1007620 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:36.809256 1007620 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:03:36.809337 1007620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-792804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:03:41.227999 1007620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-792804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.418608225s)
	I1205 19:03:41.228034 1007620 kic.go:203] duration metric: took 4.418776675s to extract preloaded images to volume ...
	W1205 19:03:41.228173 1007620 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:03:41.228269 1007620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:03:41.272773 1007620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-792804 --name addons-792804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-792804 --network addons-792804 --ip 192.168.49.2 --volume addons-792804:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1205 19:03:41.593370 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Running}}
	I1205 19:03:41.611024 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:03:41.628384 1007620 cli_runner.go:164] Run: docker exec addons-792804 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:03:41.666564 1007620 oci.go:144] the created container "addons-792804" has a running status.
	I1205 19:03:41.666597 1007620 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa...
	I1205 19:03:41.766282 1007620 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:03:41.785812 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:03:41.801596 1007620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:03:41.801615 1007620 kic_runner.go:114] Args: [docker exec --privileged addons-792804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:03:41.844630 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:03:41.860857 1007620 machine.go:93] provisionDockerMachine start ...
	I1205 19:03:41.860946 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:41.877934 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:41.878228 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:41.878248 1007620 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:03:41.878943 1007620 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42946->127.0.0.1:32768: read: connection reset by peer
	I1205 19:03:45.005423 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-792804
	
	I1205 19:03:45.005458 1007620 ubuntu.go:169] provisioning hostname "addons-792804"
	I1205 19:03:45.005516 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.022345 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:45.022530 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:45.022543 1007620 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-792804 && echo "addons-792804" | sudo tee /etc/hostname
	I1205 19:03:45.156927 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-792804
	
	I1205 19:03:45.157040 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.174827 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:45.175005 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:45.175031 1007620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-792804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-792804/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-792804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:03:45.297907 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:03:45.297937 1007620 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20052-999445/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-999445/.minikube}
	I1205 19:03:45.297975 1007620 ubuntu.go:177] setting up certificates
	I1205 19:03:45.298007 1007620 provision.go:84] configureAuth start
	I1205 19:03:45.298078 1007620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792804
	I1205 19:03:45.314966 1007620 provision.go:143] copyHostCerts
	I1205 19:03:45.315045 1007620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem (1082 bytes)
	I1205 19:03:45.315160 1007620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem (1123 bytes)
	I1205 19:03:45.315229 1007620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem (1675 bytes)
	I1205 19:03:45.315307 1007620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem org=jenkins.addons-792804 san=[127.0.0.1 192.168.49.2 addons-792804 localhost minikube]
	I1205 19:03:45.451014 1007620 provision.go:177] copyRemoteCerts
	I1205 19:03:45.451088 1007620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:03:45.451142 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.467782 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:45.558163 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 19:03:45.579143 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:03:45.600196 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:03:45.620351 1007620 provision.go:87] duration metric: took 322.327679ms to configureAuth
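
The provision step above generates a server certificate signed by the local minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-792804, localhost, minikube). A minimal, hypothetical Go sketch of the same idea follows; it is self-signed for brevity and is not minikube's actual provisioning code.

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key for the server certificate.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		// Certificate template carrying the SANs seen in the log above.
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-792804"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-792804", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}

		// Self-signed here; minikube signs with its CA key and ca.pem instead.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
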
	I1205 19:03:45.620377 1007620 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:03:45.620538 1007620 config.go:182] Loaded profile config "addons-792804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:03:45.620642 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.636471 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:45.636632 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:45.636648 1007620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:03:45.848877 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:03:45.848909 1007620 machine.go:96] duration metric: took 3.988029585s to provisionDockerMachine
	I1205 19:03:45.848920 1007620 client.go:171] duration metric: took 14.506213736s to LocalClient.Create
	I1205 19:03:45.848940 1007620 start.go:167] duration metric: took 14.506288291s to libmachine.API.Create "addons-792804"
	I1205 19:03:45.848952 1007620 start.go:293] postStartSetup for "addons-792804" (driver="docker")
	I1205 19:03:45.848967 1007620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:03:45.849024 1007620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:03:45.849060 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.866377 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:45.962435 1007620 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:03:45.965267 1007620 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:03:45.965304 1007620 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:03:45.965320 1007620 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:03:45.965330 1007620 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 19:03:45.965343 1007620 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/addons for local assets ...
	I1205 19:03:45.965397 1007620 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/files for local assets ...
	I1205 19:03:45.965423 1007620 start.go:296] duration metric: took 116.464033ms for postStartSetup
	I1205 19:03:45.965678 1007620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792804
	I1205 19:03:45.982453 1007620 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/config.json ...
	I1205 19:03:45.982677 1007620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:03:45.982719 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.997777 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:46.086573 1007620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:03:46.090799 1007620 start.go:128] duration metric: took 14.750046609s to createHost
	I1205 19:03:46.090826 1007620 start.go:83] releasing machines lock for "addons-792804", held for 14.750173592s
	I1205 19:03:46.090895 1007620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792804
	I1205 19:03:46.107485 1007620 ssh_runner.go:195] Run: cat /version.json
	I1205 19:03:46.107531 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:46.107574 1007620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:03:46.107650 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:46.124478 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:46.125448 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:46.279554 1007620 ssh_runner.go:195] Run: systemctl --version
	I1205 19:03:46.283519 1007620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:03:46.419961 1007620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:03:46.424281 1007620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:03:46.441038 1007620 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:03:46.441087 1007620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:03:46.465976 1007620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1205 19:03:46.466023 1007620 start.go:495] detecting cgroup driver to use...
	I1205 19:03:46.466059 1007620 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 19:03:46.466125 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:03:46.480086 1007620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:03:46.489326 1007620 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:03:46.489366 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:03:46.500864 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:03:46.512719 1007620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:03:46.592208 1007620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:03:46.671761 1007620 docker.go:233] disabling docker service ...
	I1205 19:03:46.671821 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:03:46.689293 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:03:46.699058 1007620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:03:46.777544 1007620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:03:46.856047 1007620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:03:46.865855 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:03:46.879735 1007620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:03:46.879805 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.888082 1007620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:03:46.888139 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.896292 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.904335 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.912466 1007620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:03:46.920151 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.928160 1007620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.941250 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
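
The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, switch cgroup_manager to cgroupfs, set conmon_cgroup, and allow unprivileged low ports. As a rough Go equivalent of the first of those edits (a hypothetical helper, not minikube's crio.go), the substitution looks like this:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setPauseImage mirrors the sed expression in the log:
	// s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|
	func setPauseImage(conf, image string) string {
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
	}

	func main() {
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
		fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10"))
	}
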
	I1205 19:03:46.949265 1007620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:03:46.956470 1007620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:03:46.956517 1007620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:03:46.968412 1007620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:03:46.976186 1007620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:03:47.047208 1007620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:03:47.138773 1007620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:03:47.138857 1007620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:03:47.142285 1007620 start.go:563] Will wait 60s for crictl version
	I1205 19:03:47.142333 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:03:47.145252 1007620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:03:47.177592 1007620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:03:47.177679 1007620 ssh_runner.go:195] Run: crio --version
	I1205 19:03:47.210525 1007620 ssh_runner.go:195] Run: crio --version
	I1205 19:03:47.244272 1007620 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1205 19:03:47.245513 1007620 cli_runner.go:164] Run: docker network inspect addons-792804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:03:47.261863 1007620 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:03:47.265346 1007620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:03:47.275328 1007620 kubeadm.go:883] updating cluster {Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:03:47.275439 1007620 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:47.275480 1007620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:03:47.340625 1007620 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:03:47.340651 1007620 crio.go:433] Images already preloaded, skipping extraction
	I1205 19:03:47.340708 1007620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:03:47.371963 1007620 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:03:47.371988 1007620 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:03:47.371999 1007620 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1205 19:03:47.372122 1007620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-792804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
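
The kubelet unit dumped above is rendered from the cluster config (node IP, hostname override, kubeconfig paths). A small, hypothetical text/template sketch of how such an ExecStart line could be rendered; the template and field names here are illustrative stand-ins, not minikube's actual systemd template.

	package main

	import (
		"os"
		"text/template"
	)

	// unit is a cut-down stand-in for a kubelet systemd drop-in template.
	const unit = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet ` +
		`--hostname-override={{.NodeName}} --node-ip={{.NodeIP}} ` +
		`--kubeconfig=/etc/kubernetes/kubelet.conf
	`

	func main() {
		data := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.31.2", "addons-792804", "192.168.49.2"}

		t := template.Must(template.New("kubelet").Parse(unit))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
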
	I1205 19:03:47.372207 1007620 ssh_runner.go:195] Run: crio config
	I1205 19:03:47.412269 1007620 cni.go:84] Creating CNI manager for ""
	I1205 19:03:47.412290 1007620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:47.412301 1007620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:03:47.412325 1007620 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-792804 NodeName:addons-792804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:03:47.412482 1007620 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-792804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
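
	The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch that reads a couple of fields back out, using gopkg.in/yaml.v3 as an assumed third-party dependency and an illustrative file name, not part of this report's tooling:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // illustrative path
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Prints e.g. "ClusterConfiguration v1.31.2"; kubernetesVersion is
			// only set in the ClusterConfiguration document, <nil> elsewhere.
			fmt.Println(doc["kind"], doc["kubernetesVersion"])
		}
	}
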
	
	I1205 19:03:47.412555 1007620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:03:47.420660 1007620 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:03:47.420710 1007620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:03:47.428022 1007620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 19:03:47.443248 1007620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:03:47.458550 1007620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1205 19:03:47.473278 1007620 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:03:47.476114 1007620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:03:47.485217 1007620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:03:47.554279 1007620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:03:47.565710 1007620 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804 for IP: 192.168.49.2
	I1205 19:03:47.565741 1007620 certs.go:194] generating shared ca certs ...
	I1205 19:03:47.565767 1007620 certs.go:226] acquiring lock for ca certs: {Name:mk27706fe4627f850c07ffcdfc76cdd3f60bd8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:47.565887 1007620 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key
	I1205 19:03:48.115880 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt ...
	I1205 19:03:48.115916 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt: {Name:mkd39417c4cc8ca1b9b6fcb39e8efed056212001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.116102 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key ...
	I1205 19:03:48.116118 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key: {Name:mk695f58db5d52d7c0027448e60494b13134bb2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.116195 1007620 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key
	I1205 19:03:48.194719 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt ...
	I1205 19:03:48.194746 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt: {Name:mk3e9d1f62ee9c100c195c9fb75a0f6fc7801ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.194908 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key ...
	I1205 19:03:48.194919 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key: {Name:mk83c7b827b2002819a89d4eadf05e2df95b9691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.195838 1007620 certs.go:256] generating profile certs ...
	I1205 19:03:48.195921 1007620 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.key
	I1205 19:03:48.195936 1007620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt with IP's: []
	I1205 19:03:48.454671 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt ...
	I1205 19:03:48.454699 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: {Name:mk19e37b5ba8af69968fdb70a6516b1c1949315c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.454861 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.key ...
	I1205 19:03:48.454872 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.key: {Name:mk89f0f5e6cc05ec6c7365db3a020f83ffeabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.455627 1007620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472
	I1205 19:03:48.455648 1007620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 19:03:48.564436 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472 ...
	I1205 19:03:48.564466 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472: {Name:mk94c52999036ab21c334b181980b7208d83c549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.565286 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472 ...
	I1205 19:03:48.565303 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472: {Name:mkb0997134fef5b507458a367d95814cd530319c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.565792 1007620 certs.go:381] copying /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472 -> /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt
	I1205 19:03:48.565865 1007620 certs.go:385] copying /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472 -> /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key
	I1205 19:03:48.565909 1007620 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key
	I1205 19:03:48.565928 1007620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt with IP's: []
	I1205 19:03:48.668882 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt ...
	I1205 19:03:48.668913 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt: {Name:mkc43cd53e1b9b593b1e3cd6970ec1fcb81b5e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.669896 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key ...
	I1205 19:03:48.669926 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key: {Name:mkdc7be51d8e9542fc4cd9c2a89e17e3aedb0f0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.670179 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:03:48.670224 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem (1082 bytes)
	I1205 19:03:48.670258 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:03:48.670288 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem (1675 bytes)
	I1205 19:03:48.670925 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:03:48.693755 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:03:48.714277 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:03:48.734277 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:03:48.754422 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 19:03:48.774461 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:03:48.794150 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:03:48.813707 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:03:48.833461 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:03:48.853347 1007620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:03:48.868034 1007620 ssh_runner.go:195] Run: openssl version
	I1205 19:03:48.872761 1007620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:03:48.880488 1007620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:48.883433 1007620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:48.883471 1007620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:48.889301 1007620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:03:48.897023 1007620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:03:48.899730 1007620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:03:48.899779 1007620 kubeadm.go:392] StartCluster: {Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:03:48.899863 1007620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:03:48.899899 1007620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:03:48.932541 1007620 cri.go:89] found id: ""
	I1205 19:03:48.932592 1007620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:03:48.940251 1007620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:03:48.947685 1007620 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:03:48.947731 1007620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:03:48.954924 1007620 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:03:48.954943 1007620 kubeadm.go:157] found existing configuration files:
	
	I1205 19:03:48.954983 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:03:48.962093 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:03:48.962144 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:03:48.968936 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:03:48.976251 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:03:48.976292 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:03:48.983377 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:03:48.990588 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:03:48.990638 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:03:48.997528 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:03:49.004655 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:03:49.004704 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 19:03:49.011731 1007620 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:03:49.064470 1007620 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1205 19:03:49.116037 1007620 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:03:58.187135 1007620 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:03:58.187222 1007620 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:03:58.187341 1007620 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:03:58.187421 1007620 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1205 19:03:58.187463 1007620 kubeadm.go:310] OS: Linux
	I1205 19:03:58.187543 1007620 kubeadm.go:310] CGROUPS_CPU: enabled
	I1205 19:03:58.187611 1007620 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1205 19:03:58.187676 1007620 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1205 19:03:58.187720 1007620 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1205 19:03:58.187774 1007620 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1205 19:03:58.187816 1007620 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1205 19:03:58.187858 1007620 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1205 19:03:58.187933 1007620 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1205 19:03:58.187974 1007620 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1205 19:03:58.188057 1007620 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:03:58.188163 1007620 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:03:58.188276 1007620 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:03:58.188368 1007620 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:03:58.189898 1007620 out.go:235]   - Generating certificates and keys ...
	I1205 19:03:58.189976 1007620 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:03:58.190057 1007620 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:03:58.190135 1007620 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:03:58.190192 1007620 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:03:58.190275 1007620 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:03:58.190361 1007620 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:03:58.190439 1007620 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:03:58.190593 1007620 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-792804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:03:58.190675 1007620 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:03:58.190845 1007620 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-792804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:03:58.190947 1007620 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:03:58.191004 1007620 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:03:58.191042 1007620 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:03:58.191090 1007620 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:03:58.191153 1007620 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:03:58.191242 1007620 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:03:58.191318 1007620 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:03:58.191408 1007620 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:03:58.191490 1007620 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:03:58.191597 1007620 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:03:58.191658 1007620 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:03:58.192914 1007620 out.go:235]   - Booting up control plane ...
	I1205 19:03:58.193014 1007620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:03:58.193130 1007620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:03:58.193220 1007620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:03:58.193363 1007620 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:03:58.193470 1007620 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:03:58.193533 1007620 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:03:58.193672 1007620 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:03:58.193804 1007620 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:03:58.193863 1007620 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.520234ms
	I1205 19:03:58.193919 1007620 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:03:58.193969 1007620 kubeadm.go:310] [api-check] The API server is healthy after 4.501573723s
	I1205 19:03:58.194111 1007620 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:03:58.194259 1007620 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:03:58.194317 1007620 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:03:58.194488 1007620 kubeadm.go:310] [mark-control-plane] Marking the node addons-792804 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:03:58.194567 1007620 kubeadm.go:310] [bootstrap-token] Using token: o65g28.x4zn8lu1bzt9a8ym
	I1205 19:03:58.196441 1007620 out.go:235]   - Configuring RBAC rules ...
	I1205 19:03:58.196567 1007620 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:03:58.196666 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:03:58.196798 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:03:58.196955 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:03:58.197065 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:03:58.197136 1007620 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:03:58.197230 1007620 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:03:58.197268 1007620 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:03:58.197307 1007620 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:03:58.197313 1007620 kubeadm.go:310] 
	I1205 19:03:58.197367 1007620 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:03:58.197373 1007620 kubeadm.go:310] 
	I1205 19:03:58.197435 1007620 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:03:58.197441 1007620 kubeadm.go:310] 
	I1205 19:03:58.197483 1007620 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:03:58.197564 1007620 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:03:58.197638 1007620 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:03:58.197653 1007620 kubeadm.go:310] 
	I1205 19:03:58.197735 1007620 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:03:58.197743 1007620 kubeadm.go:310] 
	I1205 19:03:58.197807 1007620 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:03:58.197816 1007620 kubeadm.go:310] 
	I1205 19:03:58.197887 1007620 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:03:58.197985 1007620 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:03:58.198104 1007620 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:03:58.198117 1007620 kubeadm.go:310] 
	I1205 19:03:58.198231 1007620 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:03:58.198342 1007620 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:03:58.198353 1007620 kubeadm.go:310] 
	I1205 19:03:58.198442 1007620 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o65g28.x4zn8lu1bzt9a8ym \
	I1205 19:03:58.198590 1007620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2c5b2427d3018001d5805cd98bff895dd85ff4c852b0a0e57d4b3015d0f3ecb \
	I1205 19:03:58.198631 1007620 kubeadm.go:310] 	--control-plane 
	I1205 19:03:58.198647 1007620 kubeadm.go:310] 
	I1205 19:03:58.198747 1007620 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:03:58.198755 1007620 kubeadm.go:310] 
	I1205 19:03:58.198836 1007620 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o65g28.x4zn8lu1bzt9a8ym \
	I1205 19:03:58.198929 1007620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2c5b2427d3018001d5805cd98bff895dd85ff4c852b0a0e57d4b3015d0f3ecb 
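
	The join commands above carry --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short sketch that recomputes such a hash from a CA cert PEM (the path is taken from the log and used here only for illustration):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path as seen in the log
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
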
	I1205 19:03:58.198953 1007620 cni.go:84] Creating CNI manager for ""
	I1205 19:03:58.198967 1007620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:58.200258 1007620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:03:58.201408 1007620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:03:58.205013 1007620 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 19:03:58.205028 1007620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 19:03:58.221220 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 19:03:58.405175 1007620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:03:58.405258 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:58.405260 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-792804 minikube.k8s.io/updated_at=2024_12_05T19_03_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=addons-792804 minikube.k8s.io/primary=true
	I1205 19:03:58.412387 1007620 ops.go:34] apiserver oom_adj: -16
	I1205 19:03:58.492387 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:58.993470 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:59.493329 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:59.993064 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:00.492953 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:00.993186 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:01.492961 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:01.992512 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:02.492887 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:02.557796 1007620 kubeadm.go:1113] duration metric: took 4.152605202s to wait for elevateKubeSystemPrivileges
	I1205 19:04:02.557833 1007620 kubeadm.go:394] duration metric: took 13.658058625s to StartCluster
	I1205 19:04:02.557854 1007620 settings.go:142] acquiring lock: {Name:mk8cc47684b2d9b56f7c67a506188e087d04cea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:04:02.557965 1007620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:04:02.558353 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/kubeconfig: {Name:mk9f3e1f3f15e579e42360c3cd96b3ca0e071da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:04:02.558560 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:04:02.558573 1007620 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:04:02.558641 1007620 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 19:04:02.558794 1007620 addons.go:69] Setting yakd=true in profile "addons-792804"
	I1205 19:04:02.558808 1007620 addons.go:69] Setting ingress=true in profile "addons-792804"
	I1205 19:04:02.558824 1007620 config.go:182] Loaded profile config "addons-792804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:04:02.558831 1007620 addons.go:69] Setting storage-provisioner=true in profile "addons-792804"
	I1205 19:04:02.558838 1007620 addons.go:234] Setting addon ingress=true in "addons-792804"
	I1205 19:04:02.558846 1007620 addons.go:234] Setting addon storage-provisioner=true in "addons-792804"
	I1205 19:04:02.558840 1007620 addons.go:69] Setting registry=true in profile "addons-792804"
	I1205 19:04:02.558861 1007620 addons.go:69] Setting cloud-spanner=true in profile "addons-792804"
	I1205 19:04:02.558869 1007620 addons.go:234] Setting addon registry=true in "addons-792804"
	I1205 19:04:02.558874 1007620 addons.go:69] Setting volcano=true in profile "addons-792804"
	I1205 19:04:02.558880 1007620 addons.go:234] Setting addon cloud-spanner=true in "addons-792804"
	I1205 19:04:02.558868 1007620 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-792804"
	I1205 19:04:02.558887 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558895 1007620 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-792804"
	I1205 19:04:02.558900 1007620 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-792804"
	I1205 19:04:02.558917 1007620 addons.go:69] Setting ingress-dns=true in profile "addons-792804"
	I1205 19:04:02.558920 1007620 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-792804"
	I1205 19:04:02.558935 1007620 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-792804"
	I1205 19:04:02.558949 1007620 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-792804"
	I1205 19:04:02.558903 1007620 addons.go:69] Setting default-storageclass=true in profile "addons-792804"
	I1205 19:04:02.558975 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558988 1007620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-792804"
	I1205 19:04:02.558949 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558826 1007620 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-792804"
	I1205 19:04:02.559138 1007620 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-792804"
	I1205 19:04:02.559170 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558918 1007620 addons.go:69] Setting metrics-server=true in profile "addons-792804"
	I1205 19:04:02.559257 1007620 addons.go:234] Setting addon metrics-server=true in "addons-792804"
	I1205 19:04:02.559290 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559321 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559324 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558908 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559597 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559627 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558822 1007620 addons.go:234] Setting addon yakd=true in "addons-792804"
	I1205 19:04:02.559679 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559739 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559997 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558911 1007620 addons.go:69] Setting gcp-auth=true in profile "addons-792804"
	I1205 19:04:02.558929 1007620 addons.go:234] Setting addon ingress-dns=true in "addons-792804"
	I1205 19:04:02.558929 1007620 addons.go:69] Setting inspektor-gadget=true in profile "addons-792804"
	I1205 19:04:02.558886 1007620 addons.go:234] Setting addon volcano=true in "addons-792804"
	I1205 19:04:02.558893 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559500 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.560064 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.560115 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559576 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.560533 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558923 1007620 addons.go:69] Setting volumesnapshots=true in profile "addons-792804"
	I1205 19:04:02.560712 1007620 addons.go:234] Setting addon volumesnapshots=true in "addons-792804"
	I1205 19:04:02.560745 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.561168 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.565943 1007620 addons.go:234] Setting addon inspektor-gadget=true in "addons-792804"
	I1205 19:04:02.566055 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.566170 1007620 out.go:177] * Verifying Kubernetes components...
	I1205 19:04:02.566632 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558908 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.566864 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.566904 1007620 mustload.go:65] Loading cluster: addons-792804
	I1205 19:04:02.567559 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.567882 1007620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:04:02.571016 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.594801 1007620 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-792804"
	I1205 19:04:02.594860 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.595386 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.598432 1007620 config.go:182] Loaded profile config "addons-792804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:04:02.598656 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.598898 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.631780 1007620 addons.go:234] Setting addon default-storageclass=true in "addons-792804"
	I1205 19:04:02.631843 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.632100 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 19:04:02.632144 1007620 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 19:04:02.632175 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:04:02.632296 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 19:04:02.632521 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.634245 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 19:04:02.634268 1007620 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 19:04:02.634349 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.634921 1007620 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:04:02.635410 1007620 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:04:02.635440 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:04:02.634962 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:04:02.635519 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.635982 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:04:02.636243 1007620 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 19:04:02.637373 1007620 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:04:02.637398 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 19:04:02.637399 1007620 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:04:02.637419 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:04:02.637457 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.637473 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.642795 1007620 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:04:02.642818 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:04:02.642878 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.643957 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:04:02.649491 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:04:02.650630 1007620 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 19:04:02.651642 1007620 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:04:02.651672 1007620 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 19:04:02.651745 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.655537 1007620 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 19:04:02.657475 1007620 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 19:04:02.657481 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:04:02.657627 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:04:02.658751 1007620 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:04:02.658772 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 19:04:02.658845 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.659123 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:04:02.659140 1007620 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:04:02.659222 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.659437 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:04:02.659463 1007620 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:04:02.659518 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.661392 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:04:02.663104 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W1205 19:04:02.663907 1007620 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 19:04:02.665173 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:04:02.666263 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:04:02.666282 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:04:02.666360 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.683325 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.688053 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.691878 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 19:04:02.694507 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:04:02.701951 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:04:02.704418 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.704429 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.708379 1007620 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:04:02.708401 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 19:04:02.708464 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.711488 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.722361 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:04:02.723912 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.725384 1007620 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:04:02.725401 1007620 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:04:02.725458 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.731260 1007620 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:04:02.732524 1007620 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:04:02.733905 1007620 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:04:02.733924 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:04:02.733981 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.735608 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.736265 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.736689 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.736765 1007620 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 19:04:02.737842 1007620 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:04:02.737859 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:04:02.737896 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.746320 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.748285 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.758202 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.758939 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.759016 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.760183 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	W1205 19:04:02.778435 1007620 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 19:04:02.778467 1007620 retry.go:31] will retry after 165.167607ms: ssh: handshake failed: EOF
	I1205 19:04:02.799206 1007620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:04:02.996129 1007620 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:04:02.996219 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 19:04:03.175279 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:04:03.181625 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 19:04:03.181655 1007620 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 19:04:03.191782 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:04:03.191811 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:04:03.192159 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:04:03.192745 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:04:03.275799 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:04:03.282415 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:04:03.283831 1007620 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:04:03.283907 1007620 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:04:03.378816 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:04:03.378863 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:04:03.383131 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:04:03.477886 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:04:03.477969 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:04:03.577442 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:04:03.580703 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 19:04:03.580744 1007620 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 19:04:03.583828 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:04:03.585260 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:04:03.585282 1007620 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:04:03.776297 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:04:03.776331 1007620 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:04:03.776421 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:04:03.781382 1007620 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:04:03.781406 1007620 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:04:03.788565 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:04:03.788635 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:04:03.792573 1007620 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:04:03.792624 1007620 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:04:03.875570 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 19:04:03.875660 1007620 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 19:04:04.275776 1007620 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:04:04.275924 1007620 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:04:04.284151 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:04:04.288526 1007620 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:04:04.288601 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:04:04.298677 1007620 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.576283327s)
	I1205 19:04:04.298756 1007620 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
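For context: the sed pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a hosts entry for host.minikube.internal ahead of the forward plugin, plus a log directive before errors. Reconstructed from that sed expression only (the rest of the Corefile is not shown in the log and is left elided), the edited block should look roughly like:

    .:53 {
        log
        errors
        # ... other default plugins unchanged ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        # ... remainder unchanged ...
    }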
	I1205 19:04:04.298766 1007620 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.499512993s)
	I1205 19:04:04.299762 1007620 node_ready.go:35] waiting up to 6m0s for node "addons-792804" to be "Ready" ...
	I1205 19:04:04.387253 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:04:04.387285 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 19:04:04.396311 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:04:04.396365 1007620 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:04:04.476743 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:04:04.476853 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:04:04.482158 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:04:04.576779 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:04:04.576829 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:04:04.679807 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.504400619s)
	I1205 19:04:04.774610 1007620 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:04:04.774640 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:04:04.775377 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:04:04.775428 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:04:04.789082 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:04:04.975404 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:04:05.083956 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:04:05.084046 1007620 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:04:05.095754 1007620 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-792804" context rescaled to 1 replicas
	I1205 19:04:05.490388 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:04:05.490515 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:04:06.084359 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:04:06.084451 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:04:06.294036 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:04:06.294085 1007620 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:04:06.478896 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:06.491602 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:04:06.497526 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.305327687s)
	I1205 19:04:07.285429 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.092642682s)
	I1205 19:04:07.891666 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.615765092s)
	I1205 19:04:08.881143 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:08.982382 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.69987011s)
	I1205 19:04:08.982855 1007620 addons.go:475] Verifying addon ingress=true in "addons-792804"
	I1205 19:04:08.982544 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.599310654s)
	I1205 19:04:08.982576 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.405103748s)
	I1205 19:04:08.982601 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.398751526s)
	I1205 19:04:08.982668 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.20618809s)
	I1205 19:04:08.982732 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.698495881s)
	I1205 19:04:08.983668 1007620 addons.go:475] Verifying addon metrics-server=true in "addons-792804"
	I1205 19:04:08.982765 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.500510747s)
	I1205 19:04:08.983709 1007620 addons.go:475] Verifying addon registry=true in "addons-792804"
	I1205 19:04:08.982803 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.19365918s)
	I1205 19:04:08.984626 1007620 out.go:177] * Verifying ingress addon...
	I1205 19:04:08.986397 1007620 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-792804 service yakd-dashboard -n yakd-dashboard
	
	I1205 19:04:08.986403 1007620 out.go:177] * Verifying registry addon...
	I1205 19:04:08.987606 1007620 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:04:08.988613 1007620 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:04:08.998611 1007620 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:04:08.998636 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:08.998999 1007620 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:04:08.999020 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
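The kapi.go wait lines that follow are a poll: list pods matching the label selector in the namespace and keep checking until they report Running. A minimal client-go sketch of that pattern, with the namespace, selector, and kubeconfig path borrowed from the log; waitForPodsRunning is a made-up helper name and the real kapi.go logic is richer than this:

    // Poll pods by label selector until they are Running (illustrative sketch).
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning keeps listing pods matching selector in ns until every
    // match reports phase Running, or the timeout expires.
    func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			allRunning := true
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					allRunning = false
    					break
    				}
    			}
    			if allRunning {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
    	}
    	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitForPodsRunning(cs, "ingress-nginx",
    		"app.kubernetes.io/name=ingress-nginx", 6*time.Minute))
    }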
	I1205 19:04:09.492780 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:09.493281 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:09.888140 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.912612293s)
	W1205 19:04:09.888188 1007620 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:04:09.888227 1007620 retry.go:31] will retry after 256.21842ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:04:09.892416 1007620 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:04:09.892499 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:09.913937 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:09.994214 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:09.994847 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.145266 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
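The failure above is the usual CRD ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines it, so the first apply fails with "ensure CRDs are installed first" and minikube simply waits and retries (here with --force). A minimal sketch of that retry-with-backoff idea, standing in for rather than reproducing minikube's retry.go:

    // Retry-with-backoff sketch for the "apply CRs before CRDs are established" race.
    package main

    import (
    	"fmt"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
    // roughly doubling the wait each time (the log shows a ~256ms first delay).
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	tries := 0
    	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
    		tries++
    		if tries == 1 {
    			// Stand-in for the first kubectl apply racing the CRD registration.
    			return fmt.Errorf("no matches for kind \"VolumeSnapshotClass\"")
    		}
    		return nil // a later apply succeeds once the CRDs are established
    	})
    	fmt.Println("final result:", err)
    }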
	I1205 19:04:10.194328 1007620 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:04:10.279820 1007620 addons.go:234] Setting addon gcp-auth=true in "addons-792804"
	I1205 19:04:10.279893 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:10.280390 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:10.308090 1007620 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:04:10.308151 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:10.325462 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:10.493305 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.493654 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:10.707948 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.216216245s)
	I1205 19:04:10.708052 1007620 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-792804"
	I1205 19:04:10.709575 1007620 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:04:10.711708 1007620 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:04:10.779732 1007620 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:04:10.779765 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:10.991444 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:10.991765 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.215584 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.302999 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:11.491322 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:11.491401 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.715275 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.991815 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:11.992097 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.215221 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.491541 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:12.491927 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.715605 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.991359 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.991519 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:13.143033 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.997722111s)
	I1205 19:04:13.143122 1007620 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.834997313s)
	I1205 19:04:13.144846 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 19:04:13.146155 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:04:13.147240 1007620 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:04:13.147254 1007620 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:04:13.163751 1007620 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:04:13.163769 1007620 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:04:13.179357 1007620 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:04:13.179379 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 19:04:13.194849 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:04:13.215772 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:13.303498 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:13.492044 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:13.492655 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:13.498057 1007620 addons.go:475] Verifying addon gcp-auth=true in "addons-792804"
	I1205 19:04:13.499511 1007620 out.go:177] * Verifying gcp-auth addon...
	I1205 19:04:13.501712 1007620 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:04:13.504094 1007620 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:04:13.504113 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:13.715563 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:13.991679 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:13.992130 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.004803 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.214554 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.491427 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:14.491459 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.504078 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.715151 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.991944 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:14.991994 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.004709 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.215501 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.491659 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:15.491691 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.504549 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.715834 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.802857 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:15.991342 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:15.991705 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.004108 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.215200 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.492097 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.492411 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:16.592038 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.715013 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.991782 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.991976 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:17.004872 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.215832 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.491541 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:17.491657 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:17.504368 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.715534 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.991626 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:17.991932 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.004620 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.215966 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.303657 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:18.491895 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:18.492365 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.505759 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.716161 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.991754 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:18.992379 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.005612 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.215413 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.491343 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.491366 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:19.504168 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.715280 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.991787 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.991983 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:20.004853 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.214773 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.491335 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:20.491514 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:20.504127 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.715181 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.803342 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:20.991904 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:20.991958 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.004908 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.215050 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.491522 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:21.492144 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.504571 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.723895 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.802568 1007620 node_ready.go:49] node "addons-792804" has status "Ready":"True"
	I1205 19:04:21.802595 1007620 node_ready.go:38] duration metric: took 17.502805167s for node "addons-792804" to be "Ready" ...
	I1205 19:04:21.802605 1007620 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:04:21.811716 1007620 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace to be "Ready" ...
	I1205 19:04:22.004583 1007620 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:04:22.004689 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:22.005029 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.083234 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.216678 1007620 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:04:22.216706 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.492971 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.493845 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:22.592173 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.716562 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.992138 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:22.992560 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.005664 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.216382 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.491934 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:23.492248 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.505499 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.716877 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.817245 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:23.991822 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.992097 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:24.005140 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.215969 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.492109 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:24.492351 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:24.504903 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.716159 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.991911 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:24.992207 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.004780 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.217615 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.491639 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.492047 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:25.504990 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.716809 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.817582 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:25.992458 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:25.992996 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:26.004646 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.216928 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:26.491563 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:26.491616 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:26.504769 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.716663 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:26.991886 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:26.991888 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.004607 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.216389 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.491803 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:27.492020 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.505150 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.716775 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.818546 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:27.992027 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:27.992413 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.005136 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.215906 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.491573 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:28.491697 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.504300 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.716056 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.992101 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:28.992578 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.004819 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.216463 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.492503 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:29.493298 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.505172 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.777983 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.875946 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:29.992463 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.993335 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:30.005541 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.217111 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.491692 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:30.491918 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:30.504067 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.717485 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.992093 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:30.992213 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.004792 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.216980 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.494452 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:31.494783 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.504443 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.716195 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.991528 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.991568 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:32.004421 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.216155 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.317048 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:32.491718 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:32.491873 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.504429 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.716323 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.992977 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.993868 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:33.005024 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.279812 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.491992 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:33.492875 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.504662 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.775573 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.994796 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.995866 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:34.004780 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.217395 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.317590 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:34.492059 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:34.492457 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:34.505553 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.716745 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.992823 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:34.992839 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.005268 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.216948 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.492359 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:35.492481 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.505918 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.717333 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.992557 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:35.992773 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.004907 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.217589 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.317973 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:36.491989 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:36.492559 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.504991 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.716054 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.992455 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:36.993047 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.092753 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.216859 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.492236 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:37.492579 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.504816 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.717599 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.992243 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.992370 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:38.004376 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.216651 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.318631 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:38.492413 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:38.492755 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:38.505654 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.716618 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.992189 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:38.992844 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.003677 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:39.216452 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:39.491600 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.492088 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:39.504960 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:39.716509 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:39.993161 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:39.994377 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.080553 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:40.217477 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.491856 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:40.492517 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.505332 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:40.716340 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.817500 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:40.992446 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:40.992706 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.004485 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:41.216322 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.492315 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:41.492414 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.505001 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:41.715955 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.992201 1007620 kapi.go:107] duration metric: took 33.003584409s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:04:41.992615 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.005049 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:42.216111 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.493251 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.504481 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:42.717736 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.817969 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:42.992328 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.005562 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:43.217197 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.492763 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.505865 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:43.716814 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.992644 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.005452 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:44.216445 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.491853 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.504603 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:44.716609 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.818156 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:44.991926 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:45.004743 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:45.217105 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:45.491980 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:45.504988 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:45.716943 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:45.993289 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.004807 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:46.217342 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:46.492297 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.591952 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:46.716545 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:46.992952 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.005030 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:47.216218 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:47.317682 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:47.492457 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.505374 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:47.716442 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:47.992033 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.004636 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:48.216259 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:48.492238 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.504962 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:48.716018 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:48.991565 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.092054 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:49.215701 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:49.317729 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:49.491805 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.504455 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:49.716126 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:49.992669 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.004646 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:50.217628 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:50.492655 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.579495 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:50.784766 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:50.993048 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.077699 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:51.289583 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:51.380887 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:51.493492 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.577722 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:51.779100 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:51.992627 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.075341 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:52.219395 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:52.492374 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.505369 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:52.717215 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:52.993271 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.004964 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:53.215851 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:53.492322 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.504847 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:53.717435 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:53.817243 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:53.991958 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.005540 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:54.217459 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:54.492006 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.504952 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:54.716859 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:54.991985 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.006786 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:55.217317 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:55.491899 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.504812 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:55.717657 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:55.823182 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:55.993537 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.004796 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:56.221694 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:56.492247 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.505085 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:56.717328 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:56.992420 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.005279 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:57.216280 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:57.492045 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.504913 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:57.716386 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:57.992228 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.005178 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:58.216512 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:58.318408 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:58.492974 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.504597 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:58.716868 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:58.993027 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.020298 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:59.216160 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:59.493395 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.504947 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:59.716041 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:59.992334 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.076931 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:00.279003 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:00.379950 1007620 pod_ready.go:93] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.379982 1007620 pod_ready.go:82] duration metric: took 38.568236705s for pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.379997 1007620 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7qzsp" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.385793 1007620 pod_ready.go:93] pod "coredns-7c65d6cfc9-7qzsp" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.385865 1007620 pod_ready.go:82] duration metric: took 5.858127ms for pod "coredns-7c65d6cfc9-7qzsp" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.385898 1007620 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.397206 1007620 pod_ready.go:93] pod "etcd-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.397234 1007620 pod_ready.go:82] duration metric: took 11.323042ms for pod "etcd-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.397252 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.402610 1007620 pod_ready.go:93] pod "kube-apiserver-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.402632 1007620 pod_ready.go:82] duration metric: took 5.37105ms for pod "kube-apiserver-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.402644 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.476255 1007620 pod_ready.go:93] pod "kube-controller-manager-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.476337 1007620 pod_ready.go:82] duration metric: took 73.670892ms for pod "kube-controller-manager-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.476371 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t8lq4" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.492579 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.577170 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:00.715661 1007620 pod_ready.go:93] pod "kube-proxy-t8lq4" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.715684 1007620 pod_ready.go:82] duration metric: took 239.302942ms for pod "kube-proxy-t8lq4" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.715694 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.716182 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:00.992224 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.005747 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:01.115526 1007620 pod_ready.go:93] pod "kube-scheduler-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:01.115553 1007620 pod_ready.go:82] duration metric: took 399.852309ms for pod "kube-scheduler-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:01.115568 1007620 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:01.217287 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:01.492375 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.505446 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:01.716353 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:01.991828 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.004695 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:02.216469 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:02.491962 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.591995 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:02.717100 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:02.992167 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.005049 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:03.121460 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:03.216151 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:03.492731 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.505321 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:03.715898 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:03.992285 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.004820 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:04.216904 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:04.492154 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.505078 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:04.716867 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:04.991994 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.004753 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:05.121862 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:05.217405 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:05.494159 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.578328 1007620 kapi.go:107] duration metric: took 52.076611255s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:05:05.580588 1007620 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-792804 cluster.
	I1205 19:05:05.582036 1007620 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:05:05.583247 1007620 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:05:05.779495 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:05.993870 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.276769 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:06.494089 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.779267 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:06.992617 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.122046 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:07.217517 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:07.492899 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.716234 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:07.992083 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.217205 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:08.493003 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.717144 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:08.992118 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.122125 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:09.216970 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:09.492408 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.717233 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:09.992705 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:10.218531 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:10.492735 1007620 kapi.go:107] duration metric: took 1m1.505124576s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:05:10.716710 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:11.215419 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:11.679206 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:11.779478 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:12.217481 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:12.716044 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:13.216362 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:13.716958 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:14.122038 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:14.216682 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:14.716235 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:15.216503 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:15.716458 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:16.153259 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:16.220642 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:16.715985 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:17.216596 1007620 kapi.go:107] duration metric: took 1m6.504888403s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:05:17.217976 1007620 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, cloud-spanner, amd-gpu-device-plugin, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1205 19:05:17.219063 1007620 addons.go:510] duration metric: took 1m14.660435152s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner inspektor-gadget cloud-spanner amd-gpu-device-plugin metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1205 19:05:18.621777 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:20.621841 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:23.122245 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:25.620805 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:27.622238 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:30.121602 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:32.121693 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:32.620633 1007620 pod_ready.go:93] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:32.620660 1007620 pod_ready.go:82] duration metric: took 31.505082921s for pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:32.620674 1007620 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-plx8r" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:32.624989 1007620 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-plx8r" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:32.625009 1007620 pod_ready.go:82] duration metric: took 4.326672ms for pod "nvidia-device-plugin-daemonset-plx8r" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:32.625025 1007620 pod_ready.go:39] duration metric: took 1m10.822408846s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
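
[Editor's note] The lines above show minikube polling pods by label (kapi.go) and by readiness condition (pod_ready.go) until everything reports Ready. Below is a minimal client-go sketch of that style of wait; the kubeconfig path, the namespace, and the poll interval are illustrative assumptions, and the label selector is simply borrowed from the log — this is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Label selector taken from the log above; the namespace is an assumption.
	selector := "app.kubernetes.io/name=ingress-nginx"
	for {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !podReady(p) {
				allReady = false
				fmt.Printf("waiting for pod %q, current phase: %s\n", p.Name, p.Status.Phase)
			}
		}
		if allReady {
			fmt.Println("all matching pods are Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
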
	I1205 19:05:32.625043 1007620 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:05:32.625074 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:05:32.625122 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:05:32.659425 1007620 cri.go:89] found id: "5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:32.659448 1007620 cri.go:89] found id: ""
	I1205 19:05:32.659460 1007620 logs.go:282] 1 containers: [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d]
	I1205 19:05:32.659508 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.662879 1007620 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:05:32.662925 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:05:32.695345 1007620 cri.go:89] found id: "c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:32.695376 1007620 cri.go:89] found id: ""
	I1205 19:05:32.695386 1007620 logs.go:282] 1 containers: [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4]
	I1205 19:05:32.695431 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.698636 1007620 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:05:32.698687 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:05:32.732474 1007620 cri.go:89] found id: "90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:32.732500 1007620 cri.go:89] found id: ""
	I1205 19:05:32.732510 1007620 logs.go:282] 1 containers: [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537]
	I1205 19:05:32.732560 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.735690 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:05:32.735749 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:05:32.767444 1007620 cri.go:89] found id: "c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:32.767461 1007620 cri.go:89] found id: ""
	I1205 19:05:32.767468 1007620 logs.go:282] 1 containers: [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248]
	I1205 19:05:32.767509 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.770588 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:05:32.770638 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:05:32.806516 1007620 cri.go:89] found id: "9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:32.806537 1007620 cri.go:89] found id: ""
	I1205 19:05:32.806547 1007620 logs.go:282] 1 containers: [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035]
	I1205 19:05:32.806605 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.810090 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:05:32.810168 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:05:32.844908 1007620 cri.go:89] found id: "a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:32.844929 1007620 cri.go:89] found id: ""
	I1205 19:05:32.844936 1007620 logs.go:282] 1 containers: [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121]
	I1205 19:05:32.844991 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.848282 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:05:32.848333 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:05:32.882335 1007620 cri.go:89] found id: "ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:32.882366 1007620 cri.go:89] found id: ""
	I1205 19:05:32.882376 1007620 logs.go:282] 1 containers: [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b]
	I1205 19:05:32.882427 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.885673 1007620 logs.go:123] Gathering logs for container status ...
	I1205 19:05:32.885700 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:05:32.925637 1007620 logs.go:123] Gathering logs for kubelet ...
	I1205 19:05:32.925668 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:05:33.008046 1007620 logs.go:123] Gathering logs for dmesg ...
	I1205 19:05:33.008086 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:05:33.022973 1007620 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:05:33.023006 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:05:33.128124 1007620 logs.go:123] Gathering logs for etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] ...
	I1205 19:05:33.128156 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:33.184782 1007620 logs.go:123] Gathering logs for coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] ...
	I1205 19:05:33.184825 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:33.221070 1007620 logs.go:123] Gathering logs for kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] ...
	I1205 19:05:33.221099 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:33.255020 1007620 logs.go:123] Gathering logs for kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] ...
	I1205 19:05:33.255047 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:33.288620 1007620 logs.go:123] Gathering logs for kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] ...
	I1205 19:05:33.288655 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:33.332443 1007620 logs.go:123] Gathering logs for kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] ...
	I1205 19:05:33.332484 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:33.374559 1007620 logs.go:123] Gathering logs for kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] ...
	I1205 19:05:33.374606 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:33.434201 1007620 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:05:33.434236 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
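
[Editor's note] The log gathering above repeats a two-step pattern for each control-plane component: `sudo crictl ps -a --quiet --name=<component>` to discover the container ID, then `crictl logs --tail 400 <id>` (journalctl is used instead for kubelet and CRI-O). A small Go sketch of that ID-then-logs pattern, meant to run on the node itself; the hard-coded component name is just an example, and this is not minikube's logs.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>` from the log:
// it returns the IDs of all containers (running or exited) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Example component; the log above walks kube-apiserver, etcd, coredns,
	// kube-scheduler, kube-proxy, kube-controller-manager and kindnet this way.
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	for _, id := range ids {
		// Tail the last 400 lines, matching the crictl invocation in the log.
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("crictl logs failed:", err)
			continue
		}
		fmt.Printf("==> %s <==\n%s\n", id, out)
	}
}
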
	I1205 19:05:36.008406 1007620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:05:36.022681 1007620 api_server.go:72] duration metric: took 1m33.464078966s to wait for apiserver process to appear ...
	I1205 19:05:36.022716 1007620 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:05:36.022764 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:05:36.022816 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:05:36.055742 1007620 cri.go:89] found id: "5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:36.055767 1007620 cri.go:89] found id: ""
	I1205 19:05:36.055775 1007620 logs.go:282] 1 containers: [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d]
	I1205 19:05:36.055823 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.058949 1007620 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:05:36.059020 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:05:36.091533 1007620 cri.go:89] found id: "c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:36.091554 1007620 cri.go:89] found id: ""
	I1205 19:05:36.091563 1007620 logs.go:282] 1 containers: [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4]
	I1205 19:05:36.091609 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.094777 1007620 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:05:36.094841 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:05:36.127303 1007620 cri.go:89] found id: "90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:36.127327 1007620 cri.go:89] found id: ""
	I1205 19:05:36.127337 1007620 logs.go:282] 1 containers: [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537]
	I1205 19:05:36.127392 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.130430 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:05:36.130491 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:05:36.162804 1007620 cri.go:89] found id: "c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:36.162826 1007620 cri.go:89] found id: ""
	I1205 19:05:36.162834 1007620 logs.go:282] 1 containers: [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248]
	I1205 19:05:36.162888 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.166019 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:05:36.166071 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:05:36.199412 1007620 cri.go:89] found id: "9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:36.199435 1007620 cri.go:89] found id: ""
	I1205 19:05:36.199444 1007620 logs.go:282] 1 containers: [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035]
	I1205 19:05:36.199496 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.202572 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:05:36.202627 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:05:36.235120 1007620 cri.go:89] found id: "a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:36.235138 1007620 cri.go:89] found id: ""
	I1205 19:05:36.235145 1007620 logs.go:282] 1 containers: [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121]
	I1205 19:05:36.235192 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.238488 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:05:36.238534 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:05:36.269611 1007620 cri.go:89] found id: "ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:36.269631 1007620 cri.go:89] found id: ""
	I1205 19:05:36.269638 1007620 logs.go:282] 1 containers: [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b]
	I1205 19:05:36.269675 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.272689 1007620 logs.go:123] Gathering logs for kubelet ...
	I1205 19:05:36.272710 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:05:36.350990 1007620 logs.go:123] Gathering logs for dmesg ...
	I1205 19:05:36.351025 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:05:36.364911 1007620 logs.go:123] Gathering logs for etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] ...
	I1205 19:05:36.364943 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:36.417855 1007620 logs.go:123] Gathering logs for coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] ...
	I1205 19:05:36.417886 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:36.455229 1007620 logs.go:123] Gathering logs for kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] ...
	I1205 19:05:36.455255 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:36.493301 1007620 logs.go:123] Gathering logs for kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] ...
	I1205 19:05:36.493344 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:36.526017 1007620 logs.go:123] Gathering logs for kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] ...
	I1205 19:05:36.526045 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:36.580188 1007620 logs.go:123] Gathering logs for kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] ...
	I1205 19:05:36.580216 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:36.613958 1007620 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:05:36.613988 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:05:36.710284 1007620 logs.go:123] Gathering logs for kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] ...
	I1205 19:05:36.710313 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:36.754194 1007620 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:05:36.754225 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:05:36.831398 1007620 logs.go:123] Gathering logs for container status ...
	I1205 19:05:36.831428 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:05:39.374485 1007620 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:05:39.378078 1007620 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:05:39.378922 1007620 api_server.go:141] control plane version: v1.31.2
	I1205 19:05:39.378947 1007620 api_server.go:131] duration metric: took 3.356225004s to wait for apiserver health ...
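
[Editor's note] The health check above is an HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200 with the body "ok". A minimal sketch of such a poll against the address shown in the log; skipping TLS verification is purely an illustrative shortcut, and the retry count and interval are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.49.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification is an illustrative shortcut only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver healthz did not return ok")
}
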
	I1205 19:05:39.378958 1007620 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:05:39.378983 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:05:39.379029 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:05:39.413132 1007620 cri.go:89] found id: "5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:39.413154 1007620 cri.go:89] found id: ""
	I1205 19:05:39.413164 1007620 logs.go:282] 1 containers: [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d]
	I1205 19:05:39.413218 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.416438 1007620 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:05:39.416502 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:05:39.448862 1007620 cri.go:89] found id: "c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:39.448882 1007620 cri.go:89] found id: ""
	I1205 19:05:39.448891 1007620 logs.go:282] 1 containers: [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4]
	I1205 19:05:39.448944 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.452089 1007620 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:05:39.452151 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:05:39.484413 1007620 cri.go:89] found id: "90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:39.484431 1007620 cri.go:89] found id: ""
	I1205 19:05:39.484440 1007620 logs.go:282] 1 containers: [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537]
	I1205 19:05:39.484497 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.487757 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:05:39.487814 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:05:39.519715 1007620 cri.go:89] found id: "c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:39.519732 1007620 cri.go:89] found id: ""
	I1205 19:05:39.519739 1007620 logs.go:282] 1 containers: [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248]
	I1205 19:05:39.519777 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.522959 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:05:39.523017 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:05:39.555557 1007620 cri.go:89] found id: "9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:39.555576 1007620 cri.go:89] found id: ""
	I1205 19:05:39.555585 1007620 logs.go:282] 1 containers: [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035]
	I1205 19:05:39.555643 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.558787 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:05:39.558833 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:05:39.592188 1007620 cri.go:89] found id: "a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:39.592211 1007620 cri.go:89] found id: ""
	I1205 19:05:39.592222 1007620 logs.go:282] 1 containers: [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121]
	I1205 19:05:39.592268 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.595722 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:05:39.595774 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:05:39.628030 1007620 cri.go:89] found id: "ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:39.628054 1007620 cri.go:89] found id: ""
	I1205 19:05:39.628064 1007620 logs.go:282] 1 containers: [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b]
	I1205 19:05:39.628115 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.631581 1007620 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:05:39.631600 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:05:39.728843 1007620 logs.go:123] Gathering logs for kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] ...
	I1205 19:05:39.728872 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:39.768774 1007620 logs.go:123] Gathering logs for kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] ...
	I1205 19:05:39.768803 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:39.802910 1007620 logs.go:123] Gathering logs for container status ...
	I1205 19:05:39.802935 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:05:39.843291 1007620 logs.go:123] Gathering logs for kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] ...
	I1205 19:05:39.843351 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:39.876339 1007620 logs.go:123] Gathering logs for kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] ...
	I1205 19:05:39.876376 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:39.928825 1007620 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:05:39.928853 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:05:40.004888 1007620 logs.go:123] Gathering logs for kubelet ...
	I1205 19:05:40.004920 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:05:40.090332 1007620 logs.go:123] Gathering logs for dmesg ...
	I1205 19:05:40.090365 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:05:40.104532 1007620 logs.go:123] Gathering logs for kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] ...
	I1205 19:05:40.104559 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:40.146894 1007620 logs.go:123] Gathering logs for etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] ...
	I1205 19:05:40.146921 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:40.199901 1007620 logs.go:123] Gathering logs for coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] ...
	I1205 19:05:40.199931 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:42.745306 1007620 system_pods.go:59] 19 kube-system pods found
	I1205 19:05:42.745351 1007620 system_pods.go:61] "amd-gpu-device-plugin-rkfpl" [84620fa8-2414-4aee-997e-77166e219e34] Running
	I1205 19:05:42.745363 1007620 system_pods.go:61] "coredns-7c65d6cfc9-7qzsp" [91584be4-8041-4112-9344-52666220752c] Running
	I1205 19:05:42.745370 1007620 system_pods.go:61] "csi-hostpath-attacher-0" [3c72118e-d60d-42f2-98ed-f4e52f0e0a81] Running
	I1205 19:05:42.745375 1007620 system_pods.go:61] "csi-hostpath-resizer-0" [64a6c0c0-a813-485f-9ef2-78f9abbe9238] Running
	I1205 19:05:42.745379 1007620 system_pods.go:61] "csi-hostpathplugin-5cqk7" [131a62a4-57eb-407f-8e80-a5c3d51538e4] Running
	I1205 19:05:42.745383 1007620 system_pods.go:61] "etcd-addons-792804" [89326821-882f-491c-b92e-b4c3600ac90d] Running
	I1205 19:05:42.745387 1007620 system_pods.go:61] "kindnet-pkvzp" [263300ed-730f-4582-b989-92eaf98b155c] Running
	I1205 19:05:42.745391 1007620 system_pods.go:61] "kube-apiserver-addons-792804" [c60d5060-a65c-4ad3-94e1-d21d0377635a] Running
	I1205 19:05:42.745398 1007620 system_pods.go:61] "kube-controller-manager-addons-792804" [a90cee14-6332-4eda-8ab1-70431bc0b27a] Running
	I1205 19:05:42.745405 1007620 system_pods.go:61] "kube-ingress-dns-minikube" [18c6b37a-2a4d-40d1-9317-1cb68ea321db] Running
	I1205 19:05:42.745412 1007620 system_pods.go:61] "kube-proxy-t8lq4" [41249f05-a6bb-4e11-a772-c813f49cce31] Running
	I1205 19:05:42.745416 1007620 system_pods.go:61] "kube-scheduler-addons-792804" [bf5949b2-510c-4c13-bd24-dfa68be6bab2] Running
	I1205 19:05:42.745419 1007620 system_pods.go:61] "metrics-server-84c5f94fbc-xvwfg" [cf42e4c4-04ee-4e87-95f2-32c2eb1a286a] Running
	I1205 19:05:42.745422 1007620 system_pods.go:61] "nvidia-device-plugin-daemonset-plx8r" [7f230e91-1177-4780-b554-91b9244f8abe] Running
	I1205 19:05:42.745428 1007620 system_pods.go:61] "registry-66c9cd494c-qh8j2" [4ed56af8-db58-447f-b533-cc510548cf01] Running
	I1205 19:05:42.745432 1007620 system_pods.go:61] "registry-proxy-5jm2x" [3066695f-7cd9-404c-b980-d75b005c5b47] Running
	I1205 19:05:42.745438 1007620 system_pods.go:61] "snapshot-controller-56fcc65765-5m8wt" [6c38ca6e-af09-4f1a-8803-1060c8ce24c7] Running
	I1205 19:05:42.745441 1007620 system_pods.go:61] "snapshot-controller-56fcc65765-xj8db" [31083923-e814-4a2d-a314-93ee4fdb3c83] Running
	I1205 19:05:42.745447 1007620 system_pods.go:61] "storage-provisioner" [d1534b1f-18b0-44a3-978d-5c2cfc6fe2df] Running
	I1205 19:05:42.745453 1007620 system_pods.go:74] duration metric: took 3.366486781s to wait for pod list to return data ...
	I1205 19:05:42.745464 1007620 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:05:42.748033 1007620 default_sa.go:45] found service account: "default"
	I1205 19:05:42.748055 1007620 default_sa.go:55] duration metric: took 2.582973ms for default service account to be created ...
	I1205 19:05:42.748065 1007620 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:05:42.758566 1007620 system_pods.go:86] 19 kube-system pods found
	I1205 19:05:42.758599 1007620 system_pods.go:89] "amd-gpu-device-plugin-rkfpl" [84620fa8-2414-4aee-997e-77166e219e34] Running
	I1205 19:05:42.758606 1007620 system_pods.go:89] "coredns-7c65d6cfc9-7qzsp" [91584be4-8041-4112-9344-52666220752c] Running
	I1205 19:05:42.758610 1007620 system_pods.go:89] "csi-hostpath-attacher-0" [3c72118e-d60d-42f2-98ed-f4e52f0e0a81] Running
	I1205 19:05:42.758614 1007620 system_pods.go:89] "csi-hostpath-resizer-0" [64a6c0c0-a813-485f-9ef2-78f9abbe9238] Running
	I1205 19:05:42.758618 1007620 system_pods.go:89] "csi-hostpathplugin-5cqk7" [131a62a4-57eb-407f-8e80-a5c3d51538e4] Running
	I1205 19:05:42.758622 1007620 system_pods.go:89] "etcd-addons-792804" [89326821-882f-491c-b92e-b4c3600ac90d] Running
	I1205 19:05:42.758626 1007620 system_pods.go:89] "kindnet-pkvzp" [263300ed-730f-4582-b989-92eaf98b155c] Running
	I1205 19:05:42.758630 1007620 system_pods.go:89] "kube-apiserver-addons-792804" [c60d5060-a65c-4ad3-94e1-d21d0377635a] Running
	I1205 19:05:42.758633 1007620 system_pods.go:89] "kube-controller-manager-addons-792804" [a90cee14-6332-4eda-8ab1-70431bc0b27a] Running
	I1205 19:05:42.758639 1007620 system_pods.go:89] "kube-ingress-dns-minikube" [18c6b37a-2a4d-40d1-9317-1cb68ea321db] Running
	I1205 19:05:42.758645 1007620 system_pods.go:89] "kube-proxy-t8lq4" [41249f05-a6bb-4e11-a772-c813f49cce31] Running
	I1205 19:05:42.758652 1007620 system_pods.go:89] "kube-scheduler-addons-792804" [bf5949b2-510c-4c13-bd24-dfa68be6bab2] Running
	I1205 19:05:42.758657 1007620 system_pods.go:89] "metrics-server-84c5f94fbc-xvwfg" [cf42e4c4-04ee-4e87-95f2-32c2eb1a286a] Running
	I1205 19:05:42.758664 1007620 system_pods.go:89] "nvidia-device-plugin-daemonset-plx8r" [7f230e91-1177-4780-b554-91b9244f8abe] Running
	I1205 19:05:42.758674 1007620 system_pods.go:89] "registry-66c9cd494c-qh8j2" [4ed56af8-db58-447f-b533-cc510548cf01] Running
	I1205 19:05:42.758680 1007620 system_pods.go:89] "registry-proxy-5jm2x" [3066695f-7cd9-404c-b980-d75b005c5b47] Running
	I1205 19:05:42.758690 1007620 system_pods.go:89] "snapshot-controller-56fcc65765-5m8wt" [6c38ca6e-af09-4f1a-8803-1060c8ce24c7] Running
	I1205 19:05:42.758700 1007620 system_pods.go:89] "snapshot-controller-56fcc65765-xj8db" [31083923-e814-4a2d-a314-93ee4fdb3c83] Running
	I1205 19:05:42.758710 1007620 system_pods.go:89] "storage-provisioner" [d1534b1f-18b0-44a3-978d-5c2cfc6fe2df] Running
	I1205 19:05:42.758722 1007620 system_pods.go:126] duration metric: took 10.651879ms to wait for k8s-apps to be running ...
	I1205 19:05:42.758733 1007620 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:05:42.758784 1007620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:05:42.770091 1007620 system_svc.go:56] duration metric: took 11.348262ms WaitForService to wait for kubelet
	I1205 19:05:42.770121 1007620 kubeadm.go:582] duration metric: took 1m40.211525134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:05:42.770148 1007620 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:05:42.773225 1007620 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:05:42.773250 1007620 node_conditions.go:123] node cpu capacity is 8
	I1205 19:05:42.773264 1007620 node_conditions.go:105] duration metric: took 3.110045ms to run NodePressure ...
	I1205 19:05:42.773275 1007620 start.go:241] waiting for startup goroutines ...
	I1205 19:05:42.773282 1007620 start.go:246] waiting for cluster config update ...
	I1205 19:05:42.773298 1007620 start.go:255] writing updated cluster config ...
	I1205 19:05:42.773558 1007620 ssh_runner.go:195] Run: rm -f paused
	I1205 19:05:42.824943 1007620 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:05:42.827827 1007620 out.go:177] * Done! kubectl is now configured to use "addons-792804" cluster and "default" namespace by default
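
[Editor's note] The node_conditions check near the end of the run reads each node's allocatable capacity (304681132Ki of ephemeral storage and 8 CPUs above) and its pressure conditions. A minimal client-go sketch of that kind of check; the kubeconfig path is again an assumption, and no claim is made that this mirrors minikube's node_conditions.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		storage := n.Status.Allocatable[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		// Print the pressure conditions that the verification step cares about.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
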
	
	
	==> CRI-O <==
	Dec 05 19:07:57 addons-792804 crio[1029]: time="2024-12-05 19:07:57.908355446Z" level=info msg="Removed pod sandbox: 14ea1424b4e650e600a1382cc30ef6bc44f860096ae05c620896c21031b60f2e" id=53bedbe8-9a03-4970-b9e9-4c116db48d9b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.012945444Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-2f5lt/POD" id=e9ef3a4c-8627-4d9b-947e-4c9412d12268 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.013045726Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.038454954Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-2f5lt Namespace:default ID:ad45f6dec1cad0cd37493f0b198394cc6bc08f76df203092fb9929dda6f5c458 UID:828e3ad2-3c0a-4a4d-afdd-9e6ae040ae6e NetNS:/var/run/netns/287623ae-5c28-416c-862e-afdd860420b6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.038489337Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-2f5lt to CNI network \"kindnet\" (type=ptp)"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.084639587Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-2f5lt Namespace:default ID:ad45f6dec1cad0cd37493f0b198394cc6bc08f76df203092fb9929dda6f5c458 UID:828e3ad2-3c0a-4a4d-afdd-9e6ae040ae6e NetNS:/var/run/netns/287623ae-5c28-416c-862e-afdd860420b6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.084828087Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-2f5lt for CNI network kindnet (type=ptp)"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.087973500Z" level=info msg="Ran pod sandbox ad45f6dec1cad0cd37493f0b198394cc6bc08f76df203092fb9929dda6f5c458 with infra container: default/hello-world-app-55bf9c44b4-2f5lt/POD" id=e9ef3a4c-8627-4d9b-947e-4c9412d12268 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.089203059Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=eb421478-90a1-4b79-bb7c-24ce5c3bfc90 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.089463667Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=eb421478-90a1-4b79-bb7c-24ce5c3bfc90 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.090056317Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6ad04f84-6d8f-4057-885b-a7dd4fb04bb5 name=/runtime.v1.ImageService/PullImage
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.094768348Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.327187477Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.885105711Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=6ad04f84-6d8f-4057-885b-a7dd4fb04bb5 name=/runtime.v1.ImageService/PullImage
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.885706624Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b04065bb-d946-4b26-ae78-89aaa9389dfb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.886318082Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b04065bb-d946-4b26-ae78-89aaa9389dfb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.887031207Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=29afc975-fe03-4605-ab7b-6a44a701f981 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.887539757Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=29afc975-fe03-4605-ab7b-6a44a701f981 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.888228648Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-2f5lt/hello-world-app" id=fe3337f7-9869-4a5a-8ff6-a719517c7479 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.888321245Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.902558760Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8c1229760c032f1245ef28680c79c38075f1014b109279292211a297d903e580/merged/etc/passwd: no such file or directory"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.902596450Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8c1229760c032f1245ef28680c79c38075f1014b109279292211a297d903e580/merged/etc/group: no such file or directory"
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.937563107Z" level=info msg="Created container a8f07e1e9dd57dc5a3a766e5d2fd4a5d640abcfe9c83a3a90de3f114da386dc4: default/hello-world-app-55bf9c44b4-2f5lt/hello-world-app" id=fe3337f7-9869-4a5a-8ff6-a719517c7479 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.938185300Z" level=info msg="Starting container: a8f07e1e9dd57dc5a3a766e5d2fd4a5d640abcfe9c83a3a90de3f114da386dc4" id=576a88f6-c83a-4a16-bca8-88f505b40772 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 19:08:37 addons-792804 crio[1029]: time="2024-12-05 19:08:37.944925716Z" level=info msg="Started container" PID=10987 containerID=a8f07e1e9dd57dc5a3a766e5d2fd4a5d640abcfe9c83a3a90de3f114da386dc4 description=default/hello-world-app-55bf9c44b4-2f5lt/hello-world-app id=576a88f6-c83a-4a16-bca8-88f505b40772 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad45f6dec1cad0cd37493f0b198394cc6bc08f76df203092fb9929dda6f5c458
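
[Editor's note] The CRI-O journal above walks the hello-world-app pod through RunPodSandbox, CNI attachment to the kindnet network, the echo-server image pull, CreateContainer, and StartContainer. As a rough illustration of inspecting the same CRI endpoint, here is a sketch that lists pod sandboxes via the generated CRI v1 gRPC client; the socket path comes from the node's cri-socket annotation further down, everything else is an assumption and this is only an illustration.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path as reported in the node's cri-socket annotation in this report.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range resp.Items {
		// Each sandbox corresponds to one pod, e.g. default/hello-world-app-....
		fmt.Printf("%s  %s/%s  %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
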
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	a8f07e1e9dd57       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   ad45f6dec1cad       hello-world-app-55bf9c44b4-2f5lt
	3c1a849a5c9c6       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago            Running             nginx                     0                   6f556d546ee83       nginx
	b9b49e0899283       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   3fba920129fb4       busybox
	c0fa3985b0c7b       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   528eb18553023       ingress-nginx-controller-5f85ff4588-tvww9
	d98693d5c55c9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago            Running             minikube-ingress-dns      0                   f4d193f9f67eb       kube-ingress-dns-minikube
	b112d9ec5f027       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago            Exited              patch                     0                   110e16e242775       ingress-nginx-admission-patch-dg96z
	bc7a0617a2988       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago            Exited              create                    0                   7f024a9e3d53f       ingress-nginx-admission-create-lmzlv
	1a7ee3d7b63fb       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago            Running             metrics-server            0                   e8a32a365df11       metrics-server-84c5f94fbc-xvwfg
	90f4d4feb8054       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   28431d5530807       coredns-7c65d6cfc9-7qzsp
	df26d92f96f4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   3eb846c3498b1       storage-provisioner
	ba1ab1cc72f73       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                           4 minutes ago            Running             kindnet-cni               0                   13c0fdc786cf7       kindnet-pkvzp
	9d82c3212e55e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago            Running             kube-proxy                0                   cf8473976cf44       kube-proxy-t8lq4
	c8f95dacee1a1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             4 minutes ago            Running             kube-scheduler            0                   2a839613dd0a0       kube-scheduler-addons-792804
	a29ac131c53e9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             4 minutes ago            Running             kube-controller-manager   0                   d33e4d58e65ba       kube-controller-manager-addons-792804
	5bc338ce05c4d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             4 minutes ago            Running             kube-apiserver            0                   9077d86af1dd8       kube-apiserver-addons-792804
	c239002d50bbb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago            Running             etcd                      0                   863973ad960f7       etcd-addons-792804
	
	
	==> coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] <==
	[INFO] 10.244.0.12:47551 - 3192 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000124391s
	[INFO] 10.244.0.12:41758 - 12051 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004333421s
	[INFO] 10.244.0.12:41758 - 12411 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.00489904s
	[INFO] 10.244.0.12:47100 - 1529 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004722195s
	[INFO] 10.244.0.12:47100 - 1846 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004804056s
	[INFO] 10.244.0.12:45299 - 60459 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004286632s
	[INFO] 10.244.0.12:45299 - 60189 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005832438s
	[INFO] 10.244.0.12:48627 - 8435 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000059241s
	[INFO] 10.244.0.12:48627 - 7985 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00007401s
	[INFO] 10.244.0.21:36218 - 22852 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177612s
	[INFO] 10.244.0.21:39234 - 21709 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000206868s
	[INFO] 10.244.0.21:34468 - 39417 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015468s
	[INFO] 10.244.0.21:42976 - 39038 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000223371s
	[INFO] 10.244.0.21:38749 - 24214 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108878s
	[INFO] 10.244.0.21:60779 - 211 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000145655s
	[INFO] 10.244.0.21:35281 - 39162 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.004719776s
	[INFO] 10.244.0.21:42462 - 61399 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.00474886s
	[INFO] 10.244.0.21:58937 - 45809 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004525416s
	[INFO] 10.244.0.21:55093 - 41890 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004806983s
	[INFO] 10.244.0.21:41512 - 16690 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005212357s
	[INFO] 10.244.0.21:37830 - 25053 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005305642s
	[INFO] 10.244.0.21:52191 - 24598 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000727005s
	[INFO] 10.244.0.21:53195 - 24425 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00089881s
	[INFO] 10.244.0.25:34384 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000244303s
	[INFO] 10.244.0.25:58824 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000183885s
	
	
	==> describe nodes <==
	Name:               addons-792804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-792804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=addons-792804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_03_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-792804
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:03:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-792804
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:08:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:07:02 +0000   Thu, 05 Dec 2024 19:03:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:07:02 +0000   Thu, 05 Dec 2024 19:03:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:07:02 +0000   Thu, 05 Dec 2024 19:03:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:07:02 +0000   Thu, 05 Dec 2024 19:04:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-792804
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 34807c83538b4fc29d7805a2fe8108b6
	  System UUID:                e34a953d-5e76-44c0-90b5-820e367e3919
	  Boot ID:                    63e29e64-0755-4812-a891-d8a092e25c6a
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     hello-world-app-55bf9c44b4-2f5lt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-tvww9    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m30s
	  kube-system                 coredns-7c65d6cfc9-7qzsp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m35s
	  kube-system                 etcd-addons-792804                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m41s
	  kube-system                 kindnet-pkvzp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m35s
	  kube-system                 kube-apiserver-addons-792804                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-792804        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-t8lq4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-792804                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 metrics-server-84c5f94fbc-xvwfg              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m30s  kube-proxy       
	  Normal   Starting                 4m41s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m41s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m41s  kubelet          Node addons-792804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m41s  kubelet          Node addons-792804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m41s  kubelet          Node addons-792804 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m37s  node-controller  Node addons-792804 event: Registered Node addons-792804 in Controller
	  Normal   NodeReady                4m17s  kubelet          Node addons-792804 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 19:06] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +1.011858] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +2.015843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +4.127715] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000049] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +8.191308] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[ +16.126709] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[Dec 5 19:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	
	
	==> etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] <==
	{"level":"info","ts":"2024-12-05T19:04:06.185679Z","caller":"traceutil/trace.go:171","msg":"trace[293223732] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"101.698691ms","start":"2024-12-05T19:04:06.083961Z","end":"2024-12-05T19:04:06.185660Z","steps":["trace[293223732] 'process raft request'  (duration: 100.412025ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.192472Z","caller":"traceutil/trace.go:171","msg":"trace[603909977] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"101.36148ms","start":"2024-12-05T19:04:06.091093Z","end":"2024-12-05T19:04:06.192455Z","steps":["trace[603909977] 'process raft request'  (duration: 93.358736ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.192765Z","caller":"traceutil/trace.go:171","msg":"trace[1484912059] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"100.783725ms","start":"2024-12-05T19:04:06.091966Z","end":"2024-12-05T19:04:06.192750Z","steps":["trace[1484912059] 'process raft request'  (duration: 100.078565ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.286496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.435733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-05T19:04:06.286579Z","caller":"traceutil/trace.go:171","msg":"trace[629605927] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:386; }","duration":"103.528821ms","start":"2024-12-05T19:04:06.183037Z","end":"2024-12-05T19:04:06.286566Z","steps":["trace[629605927] 'agreement among raft nodes before linearized reading'  (duration: 103.369868ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.397426Z","caller":"traceutil/trace.go:171","msg":"trace[496531437] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"101.697512ms","start":"2024-12-05T19:04:06.295697Z","end":"2024-12-05T19:04:06.397395Z","steps":["trace[496531437] 'process raft request'  (duration: 98.425183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.796908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.467696ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033709431014813 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kube-system/registry\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kube-system/registry\" value_size:1479 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T19:04:06.797059Z","caller":"traceutil/trace.go:171","msg":"trace[1973021970] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"108.852591ms","start":"2024-12-05T19:04:06.688185Z","end":"2024-12-05T19:04:06.797037Z","steps":["trace[1973021970] 'process raft request'  (duration: 108.790074ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797238Z","caller":"traceutil/trace.go:171","msg":"trace[270069390] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"112.57486ms","start":"2024-12-05T19:04:06.684653Z","end":"2024-12-05T19:04:06.797228Z","steps":["trace[270069390] 'compare'  (duration: 101.383089ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797450Z","caller":"traceutil/trace.go:171","msg":"trace[1358575336] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"103.578883ms","start":"2024-12-05T19:04:06.693864Z","end":"2024-12-05T19:04:06.797442Z","steps":["trace[1358575336] 'process raft request'  (duration: 103.55187ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797535Z","caller":"traceutil/trace.go:171","msg":"trace[1670737765] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"104.064006ms","start":"2024-12-05T19:04:06.693466Z","end":"2024-12-05T19:04:06.797530Z","steps":["trace[1670737765] 'process raft request'  (duration: 103.544568ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797550Z","caller":"traceutil/trace.go:171","msg":"trace[124355398] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"103.877045ms","start":"2024-12-05T19:04:06.693670Z","end":"2024-12-05T19:04:06.797547Z","steps":["trace[124355398] 'process raft request'  (duration: 103.708387ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797721Z","caller":"traceutil/trace.go:171","msg":"trace[364603730] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:432; }","duration":"112.86766ms","start":"2024-12-05T19:04:06.684847Z","end":"2024-12-05T19:04:06.797715Z","steps":["trace[364603730] 'read index received'  (duration: 10.553926ms)","trace[364603730] 'applied index is now lower than readState.Index'  (duration: 102.312073ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T19:04:06.797764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.908129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:06.798215Z","caller":"traceutil/trace.go:171","msg":"trace[1360580632] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:423; }","duration":"113.361144ms","start":"2024-12-05T19:04:06.684844Z","end":"2024-12-05T19:04:06.798205Z","steps":["trace[1360580632] 'agreement among raft nodes before linearized reading'  (duration: 112.888194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.876019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.138123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-05T19:04:06.876546Z","caller":"traceutil/trace.go:171","msg":"trace[671052996] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:428; }","duration":"181.666486ms","start":"2024-12-05T19:04:06.694861Z","end":"2024-12-05T19:04:06.876527Z","steps":["trace[671052996] 'agreement among raft nodes before linearized reading'  (duration: 181.076829ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.877176Z","caller":"traceutil/trace.go:171","msg":"trace[257066956] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"181.313478ms","start":"2024-12-05T19:04:06.695849Z","end":"2024-12-05T19:04:06.877162Z","steps":["trace[257066956] 'process raft request'  (duration: 179.856799ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.878126Z","caller":"traceutil/trace.go:171","msg":"trace[551061546] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"103.660851ms","start":"2024-12-05T19:04:06.774450Z","end":"2024-12-05T19:04:06.878111Z","steps":["trace[551061546] 'process raft request'  (duration: 101.322901ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.878392Z","caller":"traceutil/trace.go:171","msg":"trace[10943379] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"103.60684ms","start":"2024-12-05T19:04:06.774774Z","end":"2024-12-05T19:04:06.878381Z","steps":["trace[10943379] 'process raft request'  (duration: 101.034364ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.878600Z","caller":"traceutil/trace.go:171","msg":"trace[887548851] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"103.519342ms","start":"2024-12-05T19:04:06.775070Z","end":"2024-12-05T19:04:06.878589Z","steps":["trace[887548851] 'process raft request'  (duration: 100.778841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.878690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.982851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/cloud-spanner-emulator-dc5db94f4\" ","response":"range_response_count:1 size:2184"}
	{"level":"info","ts":"2024-12-05T19:04:06.880930Z","caller":"traceutil/trace.go:171","msg":"trace[30945718] range","detail":"{range_begin:/registry/replicasets/default/cloud-spanner-emulator-dc5db94f4; range_end:; response_count:1; response_revision:428; }","duration":"106.225649ms","start":"2024-12-05T19:04:06.774689Z","end":"2024-12-05T19:04:06.880915Z","steps":["trace[30945718] 'agreement among raft nodes before linearized reading'  (duration: 103.910227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.879762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.77615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-t8lq4\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-12-05T19:04:06.881589Z","caller":"traceutil/trace.go:171","msg":"trace[117254968] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-t8lq4; range_end:; response_count:1; response_revision:428; }","duration":"106.604162ms","start":"2024-12-05T19:04:06.774971Z","end":"2024-12-05T19:04:06.881576Z","steps":["trace[117254968] 'agreement among raft nodes before linearized reading'  (duration: 104.746662ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:08:38 up 21:51,  0 users,  load average: 0.21, 23.25, 54.12
	Linux addons-792804 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] <==
	I1205 19:06:31.575683       1 main.go:301] handling current node
	I1205 19:06:41.575640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:06:41.575677       1 main.go:301] handling current node
	I1205 19:06:51.575968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:06:51.576000       1 main.go:301] handling current node
	I1205 19:07:01.575642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:07:01.575724       1 main.go:301] handling current node
	I1205 19:07:11.575982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:07:11.576018       1 main.go:301] handling current node
	I1205 19:07:21.576205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:07:21.576244       1 main.go:301] handling current node
	I1205 19:07:31.576744       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:07:31.576780       1 main.go:301] handling current node
	I1205 19:07:41.584815       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:07:41.584856       1 main.go:301] handling current node
	I1205 19:07:51.576269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:07:51.576307       1 main.go:301] handling current node
	I1205 19:08:01.582078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:08:01.582122       1 main.go:301] handling current node
	I1205 19:08:11.575560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:08:11.575609       1 main.go:301] handling current node
	I1205 19:08:21.582070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:08:21.582119       1 main.go:301] handling current node
	I1205 19:08:31.584327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:08:31.584363       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] <==
	 > logger="UnhandledError"
	I1205 19:05:37.220792       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 19:05:52.511485       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32784: use of closed network connection
	E1205 19:05:52.690273       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32816: use of closed network connection
	I1205 19:06:01.610691       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.167.102"}
	I1205 19:06:11.854864       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 19:06:12.882247       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 19:06:16.365839       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 19:06:16.535333       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.92.187"}
	I1205 19:06:39.419512       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1205 19:06:42.595738       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:06:57.261957       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.262048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.274449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.274584       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.275537       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.275650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.289043       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.289090       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.389873       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.390029       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:06:58.275601       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:06:58.390720       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:06:58.401326       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:08:36.903046       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.203.174"}
	
	
	==> kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] <==
	E1205 19:07:18.598090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 19:07:19.841836       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1205 19:07:21.355541       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="5.575µs"
	W1205 19:07:31.450241       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:07:31.450286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:07:33.248806       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:07:33.248852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:07:38.823362       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:07:38.823413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:07:40.869380       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:07:40.869425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:00.661783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:00.661829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:06.889616       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:06.889664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:09.755436       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:09.755481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:18.907518       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:18.907581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 19:08:36.708502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.131048ms"
	I1205 19:08:36.712623       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.070058ms"
	I1205 19:08:36.712705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.68µs"
	I1205 19:08:36.720009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.741µs"
	W1205 19:08:36.932995       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:36.933037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] <==
	I1205 19:04:06.799742       1 server_linux.go:66] "Using iptables proxy"
	I1205 19:04:07.377637       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 19:04:07.377724       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:04:07.984833       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 19:04:07.984995       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:04:07.993033       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:04:07.993477       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:04:07.993550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:04:07.995149       1 config.go:199] "Starting service config controller"
	I1205 19:04:07.995178       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:04:07.995218       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:04:07.995226       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:04:08.076711       1 config.go:328] "Starting node config controller"
	I1205 19:04:08.077398       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:04:08.099654       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:04:08.099764       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:04:08.178333       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] <==
	W1205 19:03:55.099983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:03:55.100002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.174269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:55.174314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.175908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:03:55.175994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.176238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:03:55.176314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.176408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:03:55.176442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.176410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:55.176477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.908182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:03:55.908229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.938598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:55.938633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.979992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:03:55.980026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:56.104980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:56.105018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:56.136297       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:03:56.136341       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 19:03:56.183653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:56.183695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 19:03:58.193231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 19:08:36 addons-792804 kubelet[1637]: E1205 19:08:36.711015    1637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d081a67-f0f9-4bf6-8141-e273c6dcc333" containerName="task-pv-container"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: E1205 19:08:36.711025    1637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42c7e93e-0b31-4707-95e9-1f0ddcc9f67d" containerName="yakd"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: E1205 19:08:36.711033    1637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37dc31d6-a34c-4a1d-a8a5-3cca85e28189" containerName="local-path-provisioner"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: E1205 19:08:36.711041    1637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="131a62a4-57eb-407f-8e80-a5c3d51538e4" containerName="liveness-probe"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: E1205 19:08:36.711049    1637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31083923-e814-4a2d-a314-93ee4fdb3c83" containerName="volume-snapshot-controller"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: E1205 19:08:36.711057    1637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c72118e-d60d-42f2-98ed-f4e52f0e0a81" containerName="csi-attacher"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: E1205 19:08:36.711065    1637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c38ca6e-af09-4f1a-8803-1060c8ce24c7" containerName="volume-snapshot-controller"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711133    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c38ca6e-af09-4f1a-8803-1060c8ce24c7" containerName="volume-snapshot-controller"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711143    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="31083923-e814-4a2d-a314-93ee4fdb3c83" containerName="volume-snapshot-controller"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711155    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f230e91-1177-4780-b554-91b9244f8abe" containerName="nvidia-device-plugin-ctr"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711163    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a62a4-57eb-407f-8e80-a5c3d51538e4" containerName="liveness-probe"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711171    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dc31d6-a34c-4a1d-a8a5-3cca85e28189" containerName="local-path-provisioner"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711179    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d081a67-f0f9-4bf6-8141-e273c6dcc333" containerName="task-pv-container"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711187    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a62a4-57eb-407f-8e80-a5c3d51538e4" containerName="node-driver-registrar"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711198    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a6c0c0-a813-485f-9ef2-78f9abbe9238" containerName="csi-resizer"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711206    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a62a4-57eb-407f-8e80-a5c3d51538e4" containerName="csi-provisioner"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711214    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a62a4-57eb-407f-8e80-a5c3d51538e4" containerName="csi-snapshotter"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711222    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="e333b8dd-e7c4-4204-b6ed-10687ba7e18d" containerName="cloud-spanner-emulator"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711231    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a62a4-57eb-407f-8e80-a5c3d51538e4" containerName="hostpath"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711239    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c72118e-d60d-42f2-98ed-f4e52f0e0a81" containerName="csi-attacher"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711247    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a62a4-57eb-407f-8e80-a5c3d51538e4" containerName="csi-external-health-monitor-controller"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.711256    1637 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c7e93e-0b31-4707-95e9-1f0ddcc9f67d" containerName="yakd"
	Dec 05 19:08:36 addons-792804 kubelet[1637]: I1205 19:08:36.873213    1637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbr8r\" (UniqueName: \"kubernetes.io/projected/828e3ad2-3c0a-4a4d-afdd-9e6ae040ae6e-kube-api-access-gbr8r\") pod \"hello-world-app-55bf9c44b4-2f5lt\" (UID: \"828e3ad2-3c0a-4a4d-afdd-9e6ae040ae6e\") " pod="default/hello-world-app-55bf9c44b4-2f5lt"
	Dec 05 19:08:37 addons-792804 kubelet[1637]: E1205 19:08:37.638081    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425717637848232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617934,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:08:37 addons-792804 kubelet[1637]: E1205 19:08:37.638115    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425717637848232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617934,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [df26d92f96f4c720f762cb6554d0e782091843c51977a22bf90820c2cd4cef04] <==
	I1205 19:04:22.699229       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:04:22.708323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:04:22.708436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:04:22.721942       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:04:22.722587       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-792804_39a82853-bfdf-4929-b403-b15ceaa0319b!
	I1205 19:04:22.722709       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b0e8ed1c-7e3e-4708-a1c4-e02553bc7cd5", APIVersion:"v1", ResourceVersion:"892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-792804_39a82853-bfdf-4929-b403-b15ceaa0319b became leader
	I1205 19:04:22.824646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-792804_39a82853-bfdf-4929-b403-b15ceaa0319b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-792804 -n addons-792804
helpers_test.go:261: (dbg) Run:  kubectl --context addons-792804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-lmzlv ingress-nginx-admission-patch-dg96z
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-792804 describe pod ingress-nginx-admission-create-lmzlv ingress-nginx-admission-patch-dg96z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-792804 describe pod ingress-nginx-admission-create-lmzlv ingress-nginx-admission-patch-dg96z: exit status 1 (57.741328ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lmzlv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dg96z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-792804 describe pod ingress-nginx-admission-create-lmzlv ingress-nginx-admission-patch-dg96z: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 addons disable ingress --alsologtostderr -v=1: (7.756796219s)
--- FAIL: TestAddons/parallel/Ingress (151.51s)

                                                
                                    
TestAddons/parallel/MetricsServer (351.52s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.103104ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-xvwfg" [cf42e4c4-04ee-4e87-95f2-32c2eb1a286a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003217309s
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (82.212915ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 2m3.021337292s

                                                
                                                
** /stderr **
I1205 19:06:06.024075 1006315 retry.go:31] will retry after 2.391142804s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (67.426067ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 2m5.480941172s

                                                
                                                
** /stderr **
I1205 19:06:08.483679 1006315 retry.go:31] will retry after 6.169843395s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (64.920688ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 2m11.716942593s

                                                
                                                
** /stderr **
I1205 19:06:14.719634 1006315 retry.go:31] will retry after 10.049210582s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (64.303705ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 2m21.831316286s

                                                
                                                
** /stderr **
I1205 19:06:24.834183 1006315 retry.go:31] will retry after 5.068789728s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (98.06239ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 2m26.999500955s

                                                
                                                
** /stderr **
I1205 19:06:30.002015 1006315 retry.go:31] will retry after 11.46794857s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (73.310847ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 2m38.540705155s

                                                
                                                
** /stderr **
I1205 19:06:41.543536 1006315 retry.go:31] will retry after 21.900811286s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (61.70737ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 3m0.503890209s

                                                
                                                
** /stderr **
I1205 19:07:03.506698 1006315 retry.go:31] will retry after 20.711013412s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (62.055374ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 3m21.278142193s

                                                
                                                
** /stderr **
I1205 19:07:24.280820 1006315 retry.go:31] will retry after 34.778077619s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (78.9417ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 3m56.138752874s

                                                
                                                
** /stderr **
I1205 19:07:59.141219 1006315 retry.go:31] will retry after 46.302082337s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (73.072176ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 4m42.514163086s

                                                
                                                
** /stderr **
I1205 19:08:45.516953 1006315 retry.go:31] will retry after 1m8.594335138s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (73.515683ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 5m51.176075661s

                                                
                                                
** /stderr **
I1205 19:09:54.185629 1006315 retry.go:31] will retry after 51.393916421s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (62.884607ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 6m42.640316905s

                                                
                                                
** /stderr **
I1205 19:10:45.643305 1006315 retry.go:31] will retry after 1m4.378196954s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-792804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-792804 top pods -n kube-system: exit status 1 (62.029376ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7qzsp, age: 7m47.085769611s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
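Every kubectl top retry above fails with "Metrics not available" for the coredns pod even though the metrics-server pod itself was reported Running, so the metrics.k8s.io aggregation layer is the natural next thing to inspect. A minimal, hypothetical follow-up (not something this test runs; the v1beta1.metrics.k8s.io APIService name is the one metrics-server conventionally registers, and the label selector is the one the test waited on):

	# Is the aggregated metrics API registered and Available?
	kubectl --context addons-792804 get apiservice v1beta1.metrics.k8s.io
	# What does metrics-server itself report?
	kubectl --context addons-792804 -n kube-system logs -l k8s-app=metrics-server --tail=50
	# Does the aggregated API serve anything at all?
	kubectl --context addons-792804 get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 400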
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-792804
helpers_test.go:235: (dbg) docker inspect addons-792804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f",
	        "Created": "2024-12-05T19:03:41.28736008Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1008381,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-05T19:03:41.416115782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/hostname",
	        "HostsPath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/hosts",
	        "LogPath": "/var/lib/docker/containers/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f/151493fe4197d437d0f42c889e92bf6474546eef6b7938c6592bbbd95f23401f-json.log",
	        "Name": "/addons-792804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-792804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-792804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd-init/diff:/var/lib/docker/overlay2/eeb994da5272b5c43f59ac5fc7f49f2b48f722f8f3da0a9c9746c4ff0b32901d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ea643850fb58865d701e1a36a7e9243c7873ab50c6754c5ac6c25b0fc6a16bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-792804",
	                "Source": "/var/lib/docker/volumes/addons-792804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-792804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-792804",
	                "name.minikube.sigs.k8s.io": "addons-792804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4822d7e0338dcd0d2529d3e4389ec00e9dd766e359b49100abae0e97270fd059",
	            "SandboxKey": "/var/run/docker/netns/4822d7e0338d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-792804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "230e1683a1408d00d56e983c87473d02202929efd49ac915a0c10e139c694e7e",
	                    "EndpointID": "9f5a5ad0fa823cd2b2ca76841ca932e97c39ebc3eb0f80db6e085da1e5bb76bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-792804",
	                        "151493fe4197"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
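The inspect output above is the raw form of what the harness's later lookups parse: the node's SSH endpoint is the 127.0.0.1 host port published for 22/tcp (32768 on this run), and the node IP lives under NetworkSettings.Networks. Below is a minimal Go sketch of reading those two values with docker container inspect format templates, the same template style that appears in the start log further down; the container name is the profile from this report, and the code is illustrative only, not the test harness's own helpers.

// inspect_node.go - read the published SSH port and the container IP of a
// minikube KIC node by shelling out to `docker container inspect`.
// Illustrative sketch; the real harness goes through minikube's cli_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const node = "addons-792804" // profile/container name from this report

	// Host port bound to the container's 22/tcp, e.g. "32768".
	sshPort, err := inspect(node, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}

	// Container IP on the per-profile bridge network, e.g. "192.168.49.2".
	ip, err := inspect(node, `{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}`)
	if err != nil {
		panic(err)
	}

	fmt.Printf("ssh -> 127.0.0.1:%s, node IP -> %s\n", sshPort, ip)
}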
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-792804 -n addons-792804
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 logs -n 25: (1.075607391s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-901904                                                                   | download-docker-901904 | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-857636   | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | binary-mirror-857636                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35259                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-857636                                                                     | binary-mirror-857636   | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | addons-792804                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | addons-792804                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-792804 --wait=true                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:05 UTC | 05 Dec 24 19:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:05 UTC | 05 Dec 24 19:06 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | -p addons-792804                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-792804 ip                                                                            | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-792804 ssh curl -s                                                                   | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-792804 ssh cat                                                                       | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | /opt/local-path-provisioner/pvc-fdbafd17-1365-40d4-95e6-83d3408a157a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792804 addons                                                                        | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-792804 ip                                                                            | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:08 UTC | 05 Dec 24 19:08 UTC |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:08 UTC | 05 Dec 24 19:08 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-792804 addons disable                                                                | addons-792804          | jenkins | v1.34.0 | 05 Dec 24 19:08 UTC | 05 Dec 24 19:08 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:03:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:03:19.361609 1007620 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:03:19.361876 1007620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:19.361887 1007620 out.go:358] Setting ErrFile to fd 2...
	I1205 19:03:19.361891 1007620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:19.362126 1007620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:03:19.362745 1007620 out.go:352] Setting JSON to false
	I1205 19:03:19.363654 1007620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":78350,"bootTime":1733347049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:03:19.363763 1007620 start.go:139] virtualization: kvm guest
	I1205 19:03:19.366003 1007620 out.go:177] * [addons-792804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:03:19.367413 1007620 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:03:19.367471 1007620 notify.go:220] Checking for updates...
	I1205 19:03:19.369671 1007620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:03:19.370869 1007620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:03:19.371998 1007620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:03:19.373183 1007620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:03:19.374350 1007620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:03:19.375556 1007620 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:03:19.397781 1007620 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:03:19.397911 1007620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:19.444934 1007620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:19.436368323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:19.445049 1007620 docker.go:318] overlay module found
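The docker system info --format "{{json .}}" run above is how driver validation learns about the host: storage driver, cgroup driver, CPU count and memory. A rough Go sketch of decoding just those fields from the same command follows; the struct is trimmed for illustration and does not claim to match minikube's own info type.

// docker_info.go - decode a handful of fields from `docker system info`.
// Sketch only; docker emits far more fields than are modelled here.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	Driver       string `json:"Driver"`       // e.g. "overlay2"
	CgroupDriver string `json:"CgroupDriver"` // e.g. "cgroupfs"
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"` // bytes
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s cgroup=%s cpus=%d mem=%d bytes\n",
		info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal)
}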
	I1205 19:03:19.446834 1007620 out.go:177] * Using the docker driver based on user configuration
	I1205 19:03:19.447976 1007620 start.go:297] selected driver: docker
	I1205 19:03:19.447989 1007620 start.go:901] validating driver "docker" against <nil>
	I1205 19:03:19.448001 1007620 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:03:19.448796 1007620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:19.495477 1007620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:19.486453627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:19.495718 1007620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:03:19.496007 1007620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:03:19.497657 1007620 out.go:177] * Using Docker driver with root privileges
	I1205 19:03:19.498817 1007620 cni.go:84] Creating CNI manager for ""
	I1205 19:03:19.498879 1007620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:19.498889 1007620 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:03:19.498945 1007620 start.go:340] cluster config:
	{Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
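The cluster config dumped above is what the rest of the start sequence is driven from: driver, memory and CPU budget, container runtime, and the node list. The Go sketch below models a trimmed slice of that data with the values visible in the log; the struct names are invented for the illustration and are not minikube's actual config types.

// cluster_config.go - a trimmed, illustrative model of the cluster config
// values seen in the start log (not minikube's real type definitions).
package main

import (
	"encoding/json"
	"fmt"
)

type k8sConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type node struct {
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MB
	CPUs             int
	DiskSize         int // MB
	KubernetesConfig k8sConfig
	Nodes            []node
}

func main() {
	cfg := clusterConfig{
		Name:     "addons-792804",
		Driver:   "docker",
		Memory:   4000,
		CPUs:     2,
		DiskSize: 20000,
		KubernetesConfig: k8sConfig{
			KubernetesVersion: "v1.31.2",
			ClusterName:       "addons-792804",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
		Nodes: []node{{Port: 8443, KubernetesVersion: "v1.31.2", ContainerRuntime: "crio", ControlPlane: true, Worker: true}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}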
	I1205 19:03:19.500427 1007620 out.go:177] * Starting "addons-792804" primary control-plane node in "addons-792804" cluster
	I1205 19:03:19.501535 1007620 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:03:19.502611 1007620 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 19:03:19.503617 1007620 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:19.503646 1007620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:03:19.503677 1007620 cache.go:56] Caching tarball of preloaded images
	I1205 19:03:19.503722 1007620 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 19:03:19.503759 1007620 preload.go:172] Found /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:03:19.503769 1007620 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:03:19.504083 1007620 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/config.json ...
	I1205 19:03:19.504114 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/config.json: {Name:mk9d633c942a45e5afc8a11b162149a265a14aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
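The config save above goes through a write lock configured with a 500ms retry delay and a one-minute timeout. A rough sketch of that delay/timeout retry pattern using a plain lock file follows; it only illustrates the parameters shown in the log and is not the locking mechanism minikube actually uses.

// writelock.go - retry acquiring an exclusive lock file every 500ms,
// giving up after 1 minute (the Delay/Timeout values seen in the log).
// Illustration only; not minikube's actual locking implementation.
package main

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release function
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// ... write the config file while holding the lock ...
	fmt.Println("lock held; safe to write config.json")
}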
	I1205 19:03:19.519545 1007620 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 19:03:19.519666 1007620 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1205 19:03:19.519681 1007620 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1205 19:03:19.519685 1007620 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1205 19:03:19.519692 1007620 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1205 19:03:19.519699 1007620 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1205 19:03:31.339875 1007620 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1205 19:03:31.339924 1007620 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:03:31.339974 1007620 start.go:360] acquireMachinesLock for addons-792804: {Name:mk10d4262ee22036cc298cfe9235901baa45df31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:03:31.340639 1007620 start.go:364] duration metric: took 641.892µs to acquireMachinesLock for "addons-792804"
	I1205 19:03:31.340666 1007620 start.go:93] Provisioning new machine with config: &{Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:03:31.340739 1007620 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:03:31.342425 1007620 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 19:03:31.342651 1007620 start.go:159] libmachine.API.Create for "addons-792804" (driver="docker")
	I1205 19:03:31.342697 1007620 client.go:168] LocalClient.Create starting
	I1205 19:03:31.342783 1007620 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem
	I1205 19:03:31.546337 1007620 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem
	I1205 19:03:31.683901 1007620 cli_runner.go:164] Run: docker network inspect addons-792804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:03:31.700177 1007620 cli_runner.go:211] docker network inspect addons-792804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:03:31.700271 1007620 network_create.go:284] running [docker network inspect addons-792804] to gather additional debugging logs...
	I1205 19:03:31.700297 1007620 cli_runner.go:164] Run: docker network inspect addons-792804
	W1205 19:03:31.715260 1007620 cli_runner.go:211] docker network inspect addons-792804 returned with exit code 1
	I1205 19:03:31.715298 1007620 network_create.go:287] error running [docker network inspect addons-792804]: docker network inspect addons-792804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-792804 not found
	I1205 19:03:31.715315 1007620 network_create.go:289] output of [docker network inspect addons-792804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-792804 not found
	
	** /stderr **
	I1205 19:03:31.715436 1007620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:03:31.731748 1007620 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cd0bc0}
	I1205 19:03:31.731806 1007620 network_create.go:124] attempt to create docker network addons-792804 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:03:31.731854 1007620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-792804 addons-792804
	I1205 19:03:31.790499 1007620 network_create.go:108] docker network addons-792804 192.168.49.0/24 created
	I1205 19:03:31.790527 1007620 kic.go:121] calculated static IP "192.168.49.2" for the "addons-792804" container
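The network step above picks the free private subnet 192.168.49.0/24 and derives the gateway (.1), the first client address (.2, which becomes the node's static IP) and the broadcast address from it. A small Go sketch of that derivation for an IPv4 CIDR follows; it is purely illustrative, not minikube's network package.

// subnet.go - derive gateway, first client IP and broadcast from an IPv4 CIDR,
// as the log does for 192.168.49.0/24. Illustrative only.
package main

import (
	"fmt"
	"net"
)

// addOffset returns the network address plus a small offset (IPv4 only).
func addOffset(network net.IP, off byte) net.IP {
	out := make(net.IP, 4)
	copy(out, network.To4())
	out[3] += off // fine for small offsets inside a /24
	return out
}

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	network := ipnet.IP.To4()

	// Broadcast = network address OR inverted mask.
	broadcast := make(net.IP, 4)
	for i := 0; i < 4; i++ {
		broadcast[i] = network[i] | ^ipnet.Mask[i]
	}

	fmt.Println("gateway:   ", addOffset(network, 1)) // 192.168.49.1
	fmt.Println("client min:", addOffset(network, 2)) // 192.168.49.2 (static node IP)
	fmt.Println("broadcast: ", broadcast)             // 192.168.49.255
}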
	I1205 19:03:31.790611 1007620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:03:31.806982 1007620 cli_runner.go:164] Run: docker volume create addons-792804 --label name.minikube.sigs.k8s.io=addons-792804 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:03:31.822909 1007620 oci.go:103] Successfully created a docker volume addons-792804
	I1205 19:03:31.822978 1007620 cli_runner.go:164] Run: docker run --rm --name addons-792804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792804 --entrypoint /usr/bin/test -v addons-792804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1205 19:03:36.809152 1007620 cli_runner.go:217] Completed: docker run --rm --name addons-792804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792804 --entrypoint /usr/bin/test -v addons-792804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (4.986125676s)
	I1205 19:03:36.809185 1007620 oci.go:107] Successfully prepared a docker volume addons-792804
	I1205 19:03:36.809225 1007620 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:36.809256 1007620 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:03:36.809337 1007620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-792804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:03:41.227999 1007620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-792804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.418608225s)
	I1205 19:03:41.228034 1007620 kic.go:203] duration metric: took 4.418776675s to extract preloaded images to volume ...
	W1205 19:03:41.228173 1007620 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:03:41.228269 1007620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:03:41.272773 1007620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-792804 --name addons-792804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-792804 --network addons-792804 --ip 192.168.49.2 --volume addons-792804:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1205 19:03:41.593370 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Running}}
	I1205 19:03:41.611024 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:03:41.628384 1007620 cli_runner.go:164] Run: docker exec addons-792804 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:03:41.666564 1007620 oci.go:144] the created container "addons-792804" has a running status.
	I1205 19:03:41.666597 1007620 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa...
	I1205 19:03:41.766282 1007620 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:03:41.785812 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:03:41.801596 1007620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:03:41.801615 1007620 kic_runner.go:114] Args: [docker exec --privileged addons-792804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:03:41.844630 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:03:41.860857 1007620 machine.go:93] provisionDockerMachine start ...
	I1205 19:03:41.860946 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:41.877934 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:41.878228 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:41.878248 1007620 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:03:41.878943 1007620 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42946->127.0.0.1:32768: read: connection reset by peer
	I1205 19:03:45.005423 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-792804
	
	I1205 19:03:45.005458 1007620 ubuntu.go:169] provisioning hostname "addons-792804"
	I1205 19:03:45.005516 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.022345 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:45.022530 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:45.022543 1007620 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-792804 && echo "addons-792804" | sudo tee /etc/hostname
	I1205 19:03:45.156927 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-792804
	
	I1205 19:03:45.157040 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.174827 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:45.175005 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:45.175031 1007620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-792804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-792804/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-792804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:03:45.297907 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
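Everything from the hostname check onwards runs over SSH to the node: docker@127.0.0.1 on the published port (32768 on this run), authenticated with the key generated a few lines earlier. A minimal Go sketch of opening that connection and running a command with golang.org/x/crypto/ssh follows; the key path is assumed to follow the default .minikube layout, and this is not minikube's libmachine code.

// ssh_hostname.go - connect to the KIC node over the published SSH port and
// run `hostname`, as the provisioner does above. Sketch only.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := os.ExpandEnv("$HOME/.minikube/machines/addons-792804/id_rsa") // path layout assumed
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; no host key pinning
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg) // host port published for 22/tcp
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}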
	I1205 19:03:45.297937 1007620 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20052-999445/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-999445/.minikube}
	I1205 19:03:45.297975 1007620 ubuntu.go:177] setting up certificates
	I1205 19:03:45.298007 1007620 provision.go:84] configureAuth start
	I1205 19:03:45.298078 1007620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792804
	I1205 19:03:45.314966 1007620 provision.go:143] copyHostCerts
	I1205 19:03:45.315045 1007620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem (1082 bytes)
	I1205 19:03:45.315160 1007620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem (1123 bytes)
	I1205 19:03:45.315229 1007620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem (1675 bytes)
	I1205 19:03:45.315307 1007620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem org=jenkins.addons-792804 san=[127.0.0.1 192.168.49.2 addons-792804 localhost minikube]
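configureAuth above issues a server certificate signed by the local minikube CA, with SANs covering 127.0.0.1, the node IP 192.168.49.2, the machine name, localhost and minikube. A compact Go sketch of issuing such a CA-signed certificate with crypto/x509 follows; key sizes, lifetimes and the in-memory handling are placeholders, and this is not minikube's provision code.

// servercert.go - issue a CA-signed server certificate carrying the SANs seen
// in the log. Sketch only: short-lived, in-memory, RSA-2048 for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// 1. CA key and self-signed CA certificate.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// 2. Server key and CA-signed server certificate with the SAN list.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-792804"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-792804", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))

	// 3. PEM-encode the server certificate (the server.pem of the log).
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}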
	I1205 19:03:45.451014 1007620 provision.go:177] copyRemoteCerts
	I1205 19:03:45.451088 1007620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:03:45.451142 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.467782 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:45.558163 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 19:03:45.579143 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:03:45.600196 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:03:45.620351 1007620 provision.go:87] duration metric: took 322.327679ms to configureAuth
	I1205 19:03:45.620377 1007620 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:03:45.620538 1007620 config.go:182] Loaded profile config "addons-792804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:03:45.620642 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.636471 1007620 main.go:141] libmachine: Using SSH client type: native
	I1205 19:03:45.636632 1007620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 19:03:45.636648 1007620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:03:45.848877 1007620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:03:45.848909 1007620 machine.go:96] duration metric: took 3.988029585s to provisionDockerMachine
	I1205 19:03:45.848920 1007620 client.go:171] duration metric: took 14.506213736s to LocalClient.Create
	I1205 19:03:45.848940 1007620 start.go:167] duration metric: took 14.506288291s to libmachine.API.Create "addons-792804"
	I1205 19:03:45.848952 1007620 start.go:293] postStartSetup for "addons-792804" (driver="docker")
	I1205 19:03:45.848967 1007620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:03:45.849024 1007620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:03:45.849060 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.866377 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:45.962435 1007620 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:03:45.965267 1007620 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:03:45.965304 1007620 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:03:45.965320 1007620 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:03:45.965330 1007620 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 19:03:45.965343 1007620 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/addons for local assets ...
	I1205 19:03:45.965397 1007620 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/files for local assets ...
	I1205 19:03:45.965423 1007620 start.go:296] duration metric: took 116.464033ms for postStartSetup
	I1205 19:03:45.965678 1007620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792804
	I1205 19:03:45.982453 1007620 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/config.json ...
	I1205 19:03:45.982677 1007620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:03:45.982719 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:45.997777 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:46.086573 1007620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:03:46.090799 1007620 start.go:128] duration metric: took 14.750046609s to createHost
	I1205 19:03:46.090826 1007620 start.go:83] releasing machines lock for "addons-792804", held for 14.750173592s
	I1205 19:03:46.090895 1007620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792804
	I1205 19:03:46.107485 1007620 ssh_runner.go:195] Run: cat /version.json
	I1205 19:03:46.107531 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:46.107574 1007620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:03:46.107650 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:03:46.124478 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:46.125448 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:03:46.279554 1007620 ssh_runner.go:195] Run: systemctl --version
	I1205 19:03:46.283519 1007620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:03:46.419961 1007620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:03:46.424281 1007620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:03:46.441038 1007620 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:03:46.441087 1007620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:03:46.465976 1007620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1205 19:03:46.466023 1007620 start.go:495] detecting cgroup driver to use...
	I1205 19:03:46.466059 1007620 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 19:03:46.466125 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:03:46.480086 1007620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:03:46.489326 1007620 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:03:46.489366 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:03:46.500864 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:03:46.512719 1007620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:03:46.592208 1007620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:03:46.671761 1007620 docker.go:233] disabling docker service ...
	I1205 19:03:46.671821 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:03:46.689293 1007620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:03:46.699058 1007620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:03:46.777544 1007620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:03:46.856047 1007620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:03:46.865855 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:03:46.879735 1007620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:03:46.879805 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.888082 1007620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:03:46.888139 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.896292 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.904335 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.912466 1007620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:03:46.920151 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.928160 1007620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:03:46.941250 1007620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
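Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, right after /etc/crictl.yaml was written. Reconstructed from those commands alone (a sketch, not a dump taken from the node), the relevant fragments of the two files should end up roughly as:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched above only)
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]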
	I1205 19:03:46.949265 1007620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:03:46.956470 1007620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:03:46.956517 1007620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:03:46.968412 1007620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
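Note: the two commands above load br_netfilter and enable IPv4 forwarding before CRI-O is restarted. A quick manual confirmation would look like the following (a sketch, not part of the test run):

	# check the module is loaded and the sysctls the cluster needs are set
	lsmod | grep br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward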
	I1205 19:03:46.976186 1007620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:03:47.047208 1007620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:03:47.138773 1007620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:03:47.138857 1007620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:03:47.142285 1007620 start.go:563] Will wait 60s for crictl version
	I1205 19:03:47.142333 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:03:47.145252 1007620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:03:47.177592 1007620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
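Note: the version probe above runs /usr/bin/crictl against the socket configured in /etc/crictl.yaml. The same check could be reproduced by hand inside the node; the explicit --runtime-endpoint flag below is redundant once crictl.yaml is in place and is shown only for clarity (sketch only):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version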
	I1205 19:03:47.177679 1007620 ssh_runner.go:195] Run: crio --version
	I1205 19:03:47.210525 1007620 ssh_runner.go:195] Run: crio --version
	I1205 19:03:47.244272 1007620 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1205 19:03:47.245513 1007620 cli_runner.go:164] Run: docker network inspect addons-792804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:03:47.261863 1007620 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:03:47.265346 1007620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:03:47.275328 1007620 kubeadm.go:883] updating cluster {Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:03:47.275439 1007620 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:47.275480 1007620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:03:47.340625 1007620 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:03:47.340651 1007620 crio.go:433] Images already preloaded, skipping extraction
	I1205 19:03:47.340708 1007620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:03:47.371963 1007620 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:03:47.371988 1007620 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:03:47.371999 1007620 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1205 19:03:47.372122 1007620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-792804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:03:47.372207 1007620 ssh_runner.go:195] Run: crio config
	I1205 19:03:47.412269 1007620 cni.go:84] Creating CNI manager for ""
	I1205 19:03:47.412290 1007620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:47.412301 1007620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:03:47.412325 1007620 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-792804 NodeName:addons-792804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:03:47.412482 1007620 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-792804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:03:47.412555 1007620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:03:47.420660 1007620 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:03:47.420710 1007620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:03:47.428022 1007620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 19:03:47.443248 1007620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:03:47.458550 1007620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
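Note: at this point the kubelet drop-in, the kubelet unit, and the kubeadm config shown above have been copied to the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, /lib/systemd/system/kubelet.service and /var/tmp/minikube/kubeadm.yaml.new. Nothing in the log validates the generated config explicitly; assuming the `kubeadm config validate` subcommand (available in recent kubeadm releases) handles minikube's multi-document file, a manual sanity check could be (sketch only):

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new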
	I1205 19:03:47.473278 1007620 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:03:47.476114 1007620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
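Note: together with the earlier host.minikube.internal edit, the command above leaves /etc/hosts inside the node with two minikube-specific entries, roughly:

	192.168.49.1	host.minikube.internal
	192.168.49.2	control-plane.minikube.internal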
	I1205 19:03:47.485217 1007620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:03:47.554279 1007620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:03:47.565710 1007620 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804 for IP: 192.168.49.2
	I1205 19:03:47.565741 1007620 certs.go:194] generating shared ca certs ...
	I1205 19:03:47.565767 1007620 certs.go:226] acquiring lock for ca certs: {Name:mk27706fe4627f850c07ffcdfc76cdd3f60bd8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:47.565887 1007620 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key
	I1205 19:03:48.115880 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt ...
	I1205 19:03:48.115916 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt: {Name:mkd39417c4cc8ca1b9b6fcb39e8efed056212001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.116102 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key ...
	I1205 19:03:48.116118 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key: {Name:mk695f58db5d52d7c0027448e60494b13134bb2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.116195 1007620 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key
	I1205 19:03:48.194719 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt ...
	I1205 19:03:48.194746 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt: {Name:mk3e9d1f62ee9c100c195c9fb75a0f6fc7801ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.194908 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key ...
	I1205 19:03:48.194919 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key: {Name:mk83c7b827b2002819a89d4eadf05e2df95b9691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.195838 1007620 certs.go:256] generating profile certs ...
	I1205 19:03:48.195921 1007620 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.key
	I1205 19:03:48.195936 1007620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt with IP's: []
	I1205 19:03:48.454671 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt ...
	I1205 19:03:48.454699 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: {Name:mk19e37b5ba8af69968fdb70a6516b1c1949315c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.454861 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.key ...
	I1205 19:03:48.454872 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.key: {Name:mk89f0f5e6cc05ec6c7365db3a020f83ffeabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.455627 1007620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472
	I1205 19:03:48.455648 1007620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 19:03:48.564436 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472 ...
	I1205 19:03:48.564466 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472: {Name:mk94c52999036ab21c334b181980b7208d83c549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.565286 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472 ...
	I1205 19:03:48.565303 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472: {Name:mkb0997134fef5b507458a367d95814cd530319c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.565792 1007620 certs.go:381] copying /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt.743a5472 -> /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt
	I1205 19:03:48.565865 1007620 certs.go:385] copying /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key.743a5472 -> /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key
	I1205 19:03:48.565909 1007620 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key
	I1205 19:03:48.565928 1007620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt with IP's: []
	I1205 19:03:48.668882 1007620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt ...
	I1205 19:03:48.668913 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt: {Name:mkc43cd53e1b9b593b1e3cd6970ec1fcb81b5e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.669896 1007620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key ...
	I1205 19:03:48.669926 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key: {Name:mkdc7be51d8e9542fc4cd9c2a89e17e3aedb0f0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:48.670179 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:03:48.670224 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem (1082 bytes)
	I1205 19:03:48.670258 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:03:48.670288 1007620 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem (1675 bytes)
	I1205 19:03:48.670925 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:03:48.693755 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:03:48.714277 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:03:48.734277 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:03:48.754422 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 19:03:48.774461 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:03:48.794150 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:03:48.813707 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:03:48.833461 1007620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:03:48.853347 1007620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:03:48.868034 1007620 ssh_runner.go:195] Run: openssl version
	I1205 19:03:48.872761 1007620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:03:48.880488 1007620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:48.883433 1007620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:48.883471 1007620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:48.889301 1007620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
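Note: the symlink name b5213941.0 follows OpenSSL's subject-hash convention, which the preceding `openssl x509 -hash` call computes for the minikube CA. The two steps above are therefore roughly equivalent to (sketch; the hash value is taken from the symlink name in the log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941, matching the symlink name
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0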
	I1205 19:03:48.897023 1007620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:03:48.899730 1007620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:03:48.899779 1007620 kubeadm.go:392] StartCluster: {Name:addons-792804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-792804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:03:48.899863 1007620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:03:48.899899 1007620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:03:48.932541 1007620 cri.go:89] found id: ""
	I1205 19:03:48.932592 1007620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:03:48.940251 1007620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:03:48.947685 1007620 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:03:48.947731 1007620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:03:48.954924 1007620 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:03:48.954943 1007620 kubeadm.go:157] found existing configuration files:
	
	I1205 19:03:48.954983 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:03:48.962093 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:03:48.962144 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:03:48.968936 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:03:48.976251 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:03:48.976292 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:03:48.983377 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:03:48.990588 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:03:48.990638 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:03:48.997528 1007620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:03:49.004655 1007620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:03:49.004704 1007620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 19:03:49.011731 1007620 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:03:49.064470 1007620 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1205 19:03:49.116037 1007620 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:03:58.187135 1007620 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:03:58.187222 1007620 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:03:58.187341 1007620 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:03:58.187421 1007620 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1205 19:03:58.187463 1007620 kubeadm.go:310] OS: Linux
	I1205 19:03:58.187543 1007620 kubeadm.go:310] CGROUPS_CPU: enabled
	I1205 19:03:58.187611 1007620 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1205 19:03:58.187676 1007620 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1205 19:03:58.187720 1007620 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1205 19:03:58.187774 1007620 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1205 19:03:58.187816 1007620 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1205 19:03:58.187858 1007620 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1205 19:03:58.187933 1007620 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1205 19:03:58.187974 1007620 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1205 19:03:58.188057 1007620 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:03:58.188163 1007620 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:03:58.188276 1007620 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:03:58.188368 1007620 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:03:58.189898 1007620 out.go:235]   - Generating certificates and keys ...
	I1205 19:03:58.189976 1007620 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:03:58.190057 1007620 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:03:58.190135 1007620 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:03:58.190192 1007620 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:03:58.190275 1007620 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:03:58.190361 1007620 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:03:58.190439 1007620 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:03:58.190593 1007620 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-792804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:03:58.190675 1007620 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:03:58.190845 1007620 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-792804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:03:58.190947 1007620 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:03:58.191004 1007620 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:03:58.191042 1007620 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:03:58.191090 1007620 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:03:58.191153 1007620 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:03:58.191242 1007620 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:03:58.191318 1007620 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:03:58.191408 1007620 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:03:58.191490 1007620 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:03:58.191597 1007620 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:03:58.191658 1007620 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:03:58.192914 1007620 out.go:235]   - Booting up control plane ...
	I1205 19:03:58.193014 1007620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:03:58.193130 1007620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:03:58.193220 1007620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:03:58.193363 1007620 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:03:58.193470 1007620 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:03:58.193533 1007620 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:03:58.193672 1007620 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:03:58.193804 1007620 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:03:58.193863 1007620 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.520234ms
	I1205 19:03:58.193919 1007620 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:03:58.193969 1007620 kubeadm.go:310] [api-check] The API server is healthy after 4.501573723s
	I1205 19:03:58.194111 1007620 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:03:58.194259 1007620 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:03:58.194317 1007620 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:03:58.194488 1007620 kubeadm.go:310] [mark-control-plane] Marking the node addons-792804 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:03:58.194567 1007620 kubeadm.go:310] [bootstrap-token] Using token: o65g28.x4zn8lu1bzt9a8ym
	I1205 19:03:58.196441 1007620 out.go:235]   - Configuring RBAC rules ...
	I1205 19:03:58.196567 1007620 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:03:58.196666 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:03:58.196798 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:03:58.196955 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:03:58.197065 1007620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:03:58.197136 1007620 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:03:58.197230 1007620 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:03:58.197268 1007620 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:03:58.197307 1007620 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:03:58.197313 1007620 kubeadm.go:310] 
	I1205 19:03:58.197367 1007620 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:03:58.197373 1007620 kubeadm.go:310] 
	I1205 19:03:58.197435 1007620 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:03:58.197441 1007620 kubeadm.go:310] 
	I1205 19:03:58.197483 1007620 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:03:58.197564 1007620 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:03:58.197638 1007620 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:03:58.197653 1007620 kubeadm.go:310] 
	I1205 19:03:58.197735 1007620 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:03:58.197743 1007620 kubeadm.go:310] 
	I1205 19:03:58.197807 1007620 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:03:58.197816 1007620 kubeadm.go:310] 
	I1205 19:03:58.197887 1007620 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:03:58.197985 1007620 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:03:58.198104 1007620 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:03:58.198117 1007620 kubeadm.go:310] 
	I1205 19:03:58.198231 1007620 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:03:58.198342 1007620 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:03:58.198353 1007620 kubeadm.go:310] 
	I1205 19:03:58.198442 1007620 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o65g28.x4zn8lu1bzt9a8ym \
	I1205 19:03:58.198590 1007620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2c5b2427d3018001d5805cd98bff895dd85ff4c852b0a0e57d4b3015d0f3ecb \
	I1205 19:03:58.198631 1007620 kubeadm.go:310] 	--control-plane 
	I1205 19:03:58.198647 1007620 kubeadm.go:310] 
	I1205 19:03:58.198747 1007620 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:03:58.198755 1007620 kubeadm.go:310] 
	I1205 19:03:58.198836 1007620 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o65g28.x4zn8lu1bzt9a8ym \
	I1205 19:03:58.198929 1007620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2c5b2427d3018001d5805cd98bff895dd85ff4c852b0a0e57d4b3015d0f3ecb 
	I1205 19:03:58.198953 1007620 cni.go:84] Creating CNI manager for ""
	I1205 19:03:58.198967 1007620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:58.200258 1007620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:03:58.201408 1007620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:03:58.205013 1007620 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 19:03:58.205028 1007620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 19:03:58.221220 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
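Note: the kubectl apply above installs the kindnet CNI manifest minikube generates for the docker driver + crio runtime combination. A follow-up check that the CNI pods actually roll out might look like this (not run by the test; assumes the manifest creates a DaemonSet named kindnet in kube-system, as upstream kindnet does):

	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset kindnet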
	I1205 19:03:58.405175 1007620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:03:58.405258 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:58.405260 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-792804 minikube.k8s.io/updated_at=2024_12_05T19_03_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=addons-792804 minikube.k8s.io/primary=true
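Note: the two commands above grant cluster-admin to the kube-system default service account via the minikube-rbac ClusterRoleBinding and stamp the node with minikube's bookkeeping labels. Either result could be inspected by hand (sketch only):

	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-792804 --show-labels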
	I1205 19:03:58.412387 1007620 ops.go:34] apiserver oom_adj: -16
	I1205 19:03:58.492387 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:58.993470 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:59.493329 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:59.993064 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:00.492953 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:00.993186 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:01.492961 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:01.992512 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:02.492887 1007620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:04:02.557796 1007620 kubeadm.go:1113] duration metric: took 4.152605202s to wait for elevateKubeSystemPrivileges
	I1205 19:04:02.557833 1007620 kubeadm.go:394] duration metric: took 13.658058625s to StartCluster
	I1205 19:04:02.557854 1007620 settings.go:142] acquiring lock: {Name:mk8cc47684b2d9b56f7c67a506188e087d04cea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:04:02.557965 1007620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:04:02.558353 1007620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/kubeconfig: {Name:mk9f3e1f3f15e579e42360c3cd96b3ca0e071da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:04:02.558560 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:04:02.558573 1007620 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:04:02.558641 1007620 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 19:04:02.558794 1007620 addons.go:69] Setting yakd=true in profile "addons-792804"
	I1205 19:04:02.558808 1007620 addons.go:69] Setting ingress=true in profile "addons-792804"
	I1205 19:04:02.558824 1007620 config.go:182] Loaded profile config "addons-792804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:04:02.558831 1007620 addons.go:69] Setting storage-provisioner=true in profile "addons-792804"
	I1205 19:04:02.558838 1007620 addons.go:234] Setting addon ingress=true in "addons-792804"
	I1205 19:04:02.558846 1007620 addons.go:234] Setting addon storage-provisioner=true in "addons-792804"
	I1205 19:04:02.558840 1007620 addons.go:69] Setting registry=true in profile "addons-792804"
	I1205 19:04:02.558861 1007620 addons.go:69] Setting cloud-spanner=true in profile "addons-792804"
	I1205 19:04:02.558869 1007620 addons.go:234] Setting addon registry=true in "addons-792804"
	I1205 19:04:02.558874 1007620 addons.go:69] Setting volcano=true in profile "addons-792804"
	I1205 19:04:02.558880 1007620 addons.go:234] Setting addon cloud-spanner=true in "addons-792804"
	I1205 19:04:02.558868 1007620 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-792804"
	I1205 19:04:02.558887 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558895 1007620 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-792804"
	I1205 19:04:02.558900 1007620 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-792804"
	I1205 19:04:02.558917 1007620 addons.go:69] Setting ingress-dns=true in profile "addons-792804"
	I1205 19:04:02.558920 1007620 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-792804"
	I1205 19:04:02.558935 1007620 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-792804"
	I1205 19:04:02.558949 1007620 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-792804"
	I1205 19:04:02.558903 1007620 addons.go:69] Setting default-storageclass=true in profile "addons-792804"
	I1205 19:04:02.558975 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558988 1007620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-792804"
	I1205 19:04:02.558949 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558826 1007620 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-792804"
	I1205 19:04:02.559138 1007620 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-792804"
	I1205 19:04:02.559170 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.558918 1007620 addons.go:69] Setting metrics-server=true in profile "addons-792804"
	I1205 19:04:02.559257 1007620 addons.go:234] Setting addon metrics-server=true in "addons-792804"
	I1205 19:04:02.559290 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559321 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559324 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558908 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559597 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559627 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558822 1007620 addons.go:234] Setting addon yakd=true in "addons-792804"
	I1205 19:04:02.559679 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559739 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559997 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558911 1007620 addons.go:69] Setting gcp-auth=true in profile "addons-792804"
	I1205 19:04:02.558929 1007620 addons.go:234] Setting addon ingress-dns=true in "addons-792804"
	I1205 19:04:02.558929 1007620 addons.go:69] Setting inspektor-gadget=true in profile "addons-792804"
	I1205 19:04:02.558886 1007620 addons.go:234] Setting addon volcano=true in "addons-792804"
	I1205 19:04:02.558893 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.559500 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.560064 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.560115 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.559576 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.560533 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558923 1007620 addons.go:69] Setting volumesnapshots=true in profile "addons-792804"
	I1205 19:04:02.560712 1007620 addons.go:234] Setting addon volumesnapshots=true in "addons-792804"
	I1205 19:04:02.560745 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.561168 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.565943 1007620 addons.go:234] Setting addon inspektor-gadget=true in "addons-792804"
	I1205 19:04:02.566055 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.566170 1007620 out.go:177] * Verifying Kubernetes components...
	I1205 19:04:02.566632 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.558908 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.566864 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.566904 1007620 mustload.go:65] Loading cluster: addons-792804
	I1205 19:04:02.567559 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.567882 1007620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:04:02.571016 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.594801 1007620 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-792804"
	I1205 19:04:02.594860 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.595386 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.598432 1007620 config.go:182] Loaded profile config "addons-792804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:04:02.598656 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.598898 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.631780 1007620 addons.go:234] Setting addon default-storageclass=true in "addons-792804"
	I1205 19:04:02.631843 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.632100 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 19:04:02.632144 1007620 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 19:04:02.632175 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:04:02.632296 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 19:04:02.632521 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:02.634245 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 19:04:02.634268 1007620 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 19:04:02.634349 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.634921 1007620 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:04:02.635410 1007620 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:04:02.635440 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:04:02.634962 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:04:02.635519 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.635982 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:04:02.636243 1007620 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 19:04:02.637373 1007620 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:04:02.637398 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 19:04:02.637399 1007620 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:04:02.637419 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:04:02.637457 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.637473 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.642795 1007620 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:04:02.642818 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:04:02.642878 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.643957 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:04:02.649491 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:04:02.650630 1007620 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 19:04:02.651642 1007620 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:04:02.651672 1007620 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 19:04:02.651745 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.655537 1007620 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 19:04:02.657475 1007620 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 19:04:02.657481 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:04:02.657627 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:04:02.658751 1007620 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:04:02.658772 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 19:04:02.658845 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.659123 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:04:02.659140 1007620 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:04:02.659222 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.659437 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:04:02.659463 1007620 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:04:02.659518 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.661392 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:04:02.663104 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W1205 19:04:02.663907 1007620 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 19:04:02.665173 1007620 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:04:02.666263 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:04:02.666282 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:04:02.666360 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.683325 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:02.688053 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.691878 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 19:04:02.694507 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:04:02.701951 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:04:02.704418 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.704429 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.708379 1007620 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:04:02.708401 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 19:04:02.708464 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.711488 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.722361 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:04:02.723912 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.725384 1007620 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:04:02.725401 1007620 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:04:02.725458 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.731260 1007620 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:04:02.732524 1007620 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:04:02.733905 1007620 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:04:02.733924 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:04:02.733981 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.735608 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.736265 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.736689 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.736765 1007620 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 19:04:02.737842 1007620 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:04:02.737859 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:04:02.737896 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:02.746320 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.748285 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.758202 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.758939 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.759016 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:02.760183 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	W1205 19:04:02.778435 1007620 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 19:04:02.778467 1007620 retry.go:31] will retry after 165.167607ms: ssh: handshake failed: EOF
	I1205 19:04:02.799206 1007620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:04:02.996129 1007620 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:04:02.996219 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 19:04:03.175279 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:04:03.181625 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 19:04:03.181655 1007620 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 19:04:03.191782 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:04:03.191811 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:04:03.192159 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:04:03.192745 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:04:03.275799 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:04:03.282415 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:04:03.283831 1007620 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:04:03.283907 1007620 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:04:03.378816 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:04:03.378863 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:04:03.383131 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:04:03.477886 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:04:03.477969 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:04:03.577442 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:04:03.580703 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 19:04:03.580744 1007620 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 19:04:03.583828 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:04:03.585260 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:04:03.585282 1007620 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:04:03.776297 1007620 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:04:03.776331 1007620 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:04:03.776421 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:04:03.781382 1007620 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:04:03.781406 1007620 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:04:03.788565 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:04:03.788635 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:04:03.792573 1007620 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:04:03.792624 1007620 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:04:03.875570 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 19:04:03.875660 1007620 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 19:04:04.275776 1007620 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:04:04.275924 1007620 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:04:04.284151 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:04:04.288526 1007620 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:04:04.288601 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:04:04.298677 1007620 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.576283327s)
	I1205 19:04:04.298756 1007620 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 19:04:04.298766 1007620 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.499512993s)
	I1205 19:04:04.299762 1007620 node_ready.go:35] waiting up to 6m0s for node "addons-792804" to be "Ready" ...
	I1205 19:04:04.387253 1007620 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:04:04.387285 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 19:04:04.396311 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:04:04.396365 1007620 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:04:04.476743 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:04:04.476853 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:04:04.482158 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:04:04.576779 1007620 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:04:04.576829 1007620 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:04:04.679807 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.504400619s)
	I1205 19:04:04.774610 1007620 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:04:04.774640 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:04:04.775377 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:04:04.775428 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:04:04.789082 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:04:04.975404 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:04:05.083956 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:04:05.084046 1007620 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:04:05.095754 1007620 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-792804" context rescaled to 1 replicas
	I1205 19:04:05.490388 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:04:05.490515 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:04:06.084359 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:04:06.084451 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:04:06.294036 1007620 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:04:06.294085 1007620 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:04:06.478896 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:06.491602 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:04:06.497526 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.305327687s)
	I1205 19:04:07.285429 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.092642682s)
	I1205 19:04:07.891666 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.615765092s)
	I1205 19:04:08.881143 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:08.982382 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.69987011s)
	I1205 19:04:08.982855 1007620 addons.go:475] Verifying addon ingress=true in "addons-792804"
	I1205 19:04:08.982544 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.599310654s)
	I1205 19:04:08.982576 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.405103748s)
	I1205 19:04:08.982601 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.398751526s)
	I1205 19:04:08.982668 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.20618809s)
	I1205 19:04:08.982732 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.698495881s)
	I1205 19:04:08.983668 1007620 addons.go:475] Verifying addon metrics-server=true in "addons-792804"
	I1205 19:04:08.982765 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.500510747s)
	I1205 19:04:08.983709 1007620 addons.go:475] Verifying addon registry=true in "addons-792804"
	I1205 19:04:08.982803 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.19365918s)
	I1205 19:04:08.984626 1007620 out.go:177] * Verifying ingress addon...
	I1205 19:04:08.986397 1007620 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-792804 service yakd-dashboard -n yakd-dashboard
	
	I1205 19:04:08.986403 1007620 out.go:177] * Verifying registry addon...
	I1205 19:04:08.987606 1007620 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:04:08.988613 1007620 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:04:08.998611 1007620 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:04:08.998636 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:08.998999 1007620 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:04:08.999020 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:09.492780 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:09.493281 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:09.888140 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.912612293s)
	W1205 19:04:09.888188 1007620 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:04:09.888227 1007620 retry.go:31] will retry after 256.21842ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:04:09.892416 1007620 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:04:09.892499 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:09.913937 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:09.994214 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:09.994847 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.145266 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:04:10.194328 1007620 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:04:10.279820 1007620 addons.go:234] Setting addon gcp-auth=true in "addons-792804"
	I1205 19:04:10.279893 1007620 host.go:66] Checking if "addons-792804" exists ...
	I1205 19:04:10.280390 1007620 cli_runner.go:164] Run: docker container inspect addons-792804 --format={{.State.Status}}
	I1205 19:04:10.308090 1007620 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:04:10.308151 1007620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792804
	I1205 19:04:10.325462 1007620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/addons-792804/id_rsa Username:docker}
	I1205 19:04:10.493305 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.493654 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:10.707948 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.216216245s)
	I1205 19:04:10.708052 1007620 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-792804"
	I1205 19:04:10.709575 1007620 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:04:10.711708 1007620 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:04:10.779732 1007620 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:04:10.779765 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:10.991444 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:10.991765 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.215584 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.302999 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:11.491322 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:11.491401 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.715275 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.991815 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:11.992097 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.215221 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.491541 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:12.491927 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.715605 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.991359 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.991519 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:13.143033 1007620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.997722111s)
	I1205 19:04:13.143122 1007620 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.834997313s)
	I1205 19:04:13.144846 1007620 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 19:04:13.146155 1007620 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:04:13.147240 1007620 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:04:13.147254 1007620 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:04:13.163751 1007620 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:04:13.163769 1007620 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:04:13.179357 1007620 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:04:13.179379 1007620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 19:04:13.194849 1007620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:04:13.215772 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:13.303498 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:13.492044 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:13.492655 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:13.498057 1007620 addons.go:475] Verifying addon gcp-auth=true in "addons-792804"
	I1205 19:04:13.499511 1007620 out.go:177] * Verifying gcp-auth addon...
	I1205 19:04:13.501712 1007620 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:04:13.504094 1007620 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:04:13.504113 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:13.715563 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:13.991679 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:13.992130 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.004803 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.214554 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.491427 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:14.491459 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.504078 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.715151 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.991944 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:14.991994 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.004709 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.215501 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.491659 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:15.491691 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.504549 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.715834 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.802857 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:15.991342 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:15.991705 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.004108 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.215200 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.492097 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.492411 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:16.592038 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.715013 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.991782 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.991976 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:17.004872 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.215832 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.491541 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:17.491657 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:17.504368 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.715534 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.991626 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:17.991932 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.004620 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.215966 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.303657 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:18.491895 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:18.492365 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.505759 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.716161 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.991754 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:18.992379 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.005612 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.215413 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.491343 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.491366 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:19.504168 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.715280 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.991787 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.991983 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:20.004853 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.214773 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.491335 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:20.491514 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:20.504127 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.715181 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.803342 1007620 node_ready.go:53] node "addons-792804" has status "Ready":"False"
	I1205 19:04:20.991904 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:20.991958 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.004908 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.215050 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.491522 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:21.492144 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.504571 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.723895 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.802568 1007620 node_ready.go:49] node "addons-792804" has status "Ready":"True"
	I1205 19:04:21.802595 1007620 node_ready.go:38] duration metric: took 17.502805167s for node "addons-792804" to be "Ready" ...
	I1205 19:04:21.802605 1007620 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:04:21.811716 1007620 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace to be "Ready" ...
	I1205 19:04:22.004583 1007620 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:04:22.004689 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:22.005029 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.083234 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.216678 1007620 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:04:22.216706 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.492971 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.493845 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:22.592173 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.716562 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.992138 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:22.992560 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.005664 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.216382 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.491934 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:23.492248 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.505499 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.716877 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.817245 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:23.991822 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.992097 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:24.005140 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.215969 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.492109 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:24.492351 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:24.504903 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.716159 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.991911 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:24.992207 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.004780 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.217615 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.491639 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.492047 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:25.504990 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.716809 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.817582 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:25.992458 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:25.992996 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:26.004646 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.216928 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:26.491563 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:26.491616 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:26.504769 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.716663 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:26.991886 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:26.991888 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.004607 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.216389 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.491803 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:27.492020 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.505150 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.716775 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.818546 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:27.992027 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:27.992413 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.005136 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.215906 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.491573 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:28.491697 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.504300 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.716056 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.992101 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:28.992578 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.004819 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.216463 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.492503 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:29.493298 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.505172 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.777983 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.875946 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:29.992463 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.993335 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:30.005541 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.217111 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.491692 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:30.491918 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:30.504067 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.717485 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.992093 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:30.992213 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.004792 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.216980 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.494452 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:31.494783 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.504443 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.716195 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.991528 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.991568 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:32.004421 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.216155 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.317048 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:32.491718 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:32.491873 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.504429 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.716323 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.992977 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.993868 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:33.005024 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.279812 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.491992 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:33.492875 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.504662 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.775573 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.994796 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.995866 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:34.004780 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.217395 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.317590 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:34.492059 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:34.492457 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:34.505553 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.716745 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.992823 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:34.992839 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.005268 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.216948 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.492359 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:35.492481 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.505918 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.717333 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.992557 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:35.992773 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.004907 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.217589 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.317973 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:36.491989 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:36.492559 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.504991 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.716054 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.992455 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:36.993047 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.092753 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.216859 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.492236 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:37.492579 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.504816 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.717599 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.992243 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.992370 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:38.004376 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.216651 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.318631 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:38.492413 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:38.492755 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:38.505654 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.716618 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.992189 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:38.992844 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.003677 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:39.216452 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:39.491600 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.492088 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:39.504960 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:39.716509 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:39.993161 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:39.994377 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.080553 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:40.217477 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.491856 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:40.492517 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.505332 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:40.716340 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.817500 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:40.992446 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:40.992706 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.004485 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:41.216322 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.492315 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:41.492414 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.505001 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:41.715955 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.992201 1007620 kapi.go:107] duration metric: took 33.003584409s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:04:41.992615 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.005049 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:42.216111 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.493251 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.504481 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:42.717736 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.817969 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:42.992328 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.005562 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:43.217197 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.492763 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.505865 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:43.716814 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.992644 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.005452 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:44.216445 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.491853 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.504603 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:44.716609 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.818156 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:44.991926 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:45.004743 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:45.217105 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:45.491980 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:45.504988 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:45.716943 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:45.993289 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.004807 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:46.217342 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:46.492297 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.591952 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:46.716545 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:46.992952 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.005030 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:47.216218 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:47.317682 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:47.492457 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.505374 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:47.716442 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:47.992033 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.004636 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:48.216259 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:48.492238 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.504962 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:48.716018 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:48.991565 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.092054 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:49.215701 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:49.317729 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:49.491805 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.504455 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:49.716126 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:49.992669 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.004646 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:50.217628 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:50.492655 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.579495 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:50.784766 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:50.993048 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.077699 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:51.289583 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:51.380887 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:51.493492 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.577722 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:51.779100 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:51.992627 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.075341 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:52.219395 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:52.492374 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.505369 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:52.717215 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:52.993271 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.004964 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:53.215851 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:53.492322 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.504847 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:53.717435 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:53.817243 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:53.991958 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.005540 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:54.217459 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:54.492006 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.504952 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:54.716859 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:54.991985 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.006786 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:55.217317 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:55.491899 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.504812 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:55.717657 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:55.823182 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:55.993537 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.004796 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:56.221694 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:56.492247 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.505085 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:56.717328 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:56.992420 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.005279 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:57.216280 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:57.492045 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.504913 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:57.716386 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:57.992228 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.005178 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:58.216512 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:58.318408 1007620 pod_ready.go:103] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"False"
	I1205 19:04:58.492974 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.504597 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:58.716868 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:58.993027 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.020298 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:59.216160 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:59.493395 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.504947 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:59.716041 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:59.992334 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.076931 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:00.279003 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:00.379950 1007620 pod_ready.go:93] pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.379982 1007620 pod_ready.go:82] duration metric: took 38.568236705s for pod "amd-gpu-device-plugin-rkfpl" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.379997 1007620 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7qzsp" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.385793 1007620 pod_ready.go:93] pod "coredns-7c65d6cfc9-7qzsp" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.385865 1007620 pod_ready.go:82] duration metric: took 5.858127ms for pod "coredns-7c65d6cfc9-7qzsp" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.385898 1007620 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.397206 1007620 pod_ready.go:93] pod "etcd-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.397234 1007620 pod_ready.go:82] duration metric: took 11.323042ms for pod "etcd-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.397252 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.402610 1007620 pod_ready.go:93] pod "kube-apiserver-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.402632 1007620 pod_ready.go:82] duration metric: took 5.37105ms for pod "kube-apiserver-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.402644 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.476255 1007620 pod_ready.go:93] pod "kube-controller-manager-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.476337 1007620 pod_ready.go:82] duration metric: took 73.670892ms for pod "kube-controller-manager-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.476371 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t8lq4" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.492579 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.577170 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:00.715661 1007620 pod_ready.go:93] pod "kube-proxy-t8lq4" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:00.715684 1007620 pod_ready.go:82] duration metric: took 239.302942ms for pod "kube-proxy-t8lq4" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.715694 1007620 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:00.716182 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:00.992224 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.005747 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:01.115526 1007620 pod_ready.go:93] pod "kube-scheduler-addons-792804" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:01.115553 1007620 pod_ready.go:82] duration metric: took 399.852309ms for pod "kube-scheduler-addons-792804" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:01.115568 1007620 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:01.217287 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:01.492375 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.505446 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:01.716353 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:01.991828 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.004695 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:02.216469 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:02.491962 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.591995 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:02.717100 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:02.992167 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.005049 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:03.121460 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:03.216151 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:03.492731 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.505321 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:03.715898 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:03.992285 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.004820 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:04.216904 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:04.492154 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.505078 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:04.716867 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:04.991994 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.004753 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:05:05.121862 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:05.217405 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:05.494159 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.578328 1007620 kapi.go:107] duration metric: took 52.076611255s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:05:05.580588 1007620 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-792804 cluster.
	I1205 19:05:05.582036 1007620 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:05:05.583247 1007620 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:05:05.779495 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:05.993870 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.276769 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:06.494089 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.779267 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:06.992617 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.122046 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:07.217517 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:07.492899 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.716234 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:07.992083 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.217205 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:08.493003 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.717144 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:08.992118 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.122125 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:09.216970 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:09.492408 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.717233 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:09.992705 1007620 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:10.218531 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:10.492735 1007620 kapi.go:107] duration metric: took 1m1.505124576s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:05:10.716710 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:11.215419 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:11.679206 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:11.779478 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:12.217481 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:12.716044 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:13.216362 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:13.716958 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:14.122038 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:14.216682 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:14.716235 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:15.216503 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:15.716458 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:16.153259 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:16.220642 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:16.715985 1007620 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:05:17.216596 1007620 kapi.go:107] duration metric: took 1m6.504888403s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:05:17.217976 1007620 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, cloud-spanner, amd-gpu-device-plugin, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1205 19:05:17.219063 1007620 addons.go:510] duration metric: took 1m14.660435152s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner inspektor-gadget cloud-spanner amd-gpu-device-plugin metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1205 19:05:18.621777 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:20.621841 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:23.122245 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:25.620805 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:27.622238 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:30.121602 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:32.121693 1007620 pod_ready.go:103] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"False"
	I1205 19:05:32.620633 1007620 pod_ready.go:93] pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:32.620660 1007620 pod_ready.go:82] duration metric: took 31.505082921s for pod "metrics-server-84c5f94fbc-xvwfg" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:32.620674 1007620 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-plx8r" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:32.624989 1007620 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-plx8r" in "kube-system" namespace has status "Ready":"True"
	I1205 19:05:32.625009 1007620 pod_ready.go:82] duration metric: took 4.326672ms for pod "nvidia-device-plugin-daemonset-plx8r" in "kube-system" namespace to be "Ready" ...
	I1205 19:05:32.625025 1007620 pod_ready.go:39] duration metric: took 1m10.822408846s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:05:32.625043 1007620 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:05:32.625074 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:05:32.625122 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:05:32.659425 1007620 cri.go:89] found id: "5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:32.659448 1007620 cri.go:89] found id: ""
	I1205 19:05:32.659460 1007620 logs.go:282] 1 containers: [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d]
	I1205 19:05:32.659508 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.662879 1007620 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:05:32.662925 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:05:32.695345 1007620 cri.go:89] found id: "c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:32.695376 1007620 cri.go:89] found id: ""
	I1205 19:05:32.695386 1007620 logs.go:282] 1 containers: [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4]
	I1205 19:05:32.695431 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.698636 1007620 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:05:32.698687 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:05:32.732474 1007620 cri.go:89] found id: "90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:32.732500 1007620 cri.go:89] found id: ""
	I1205 19:05:32.732510 1007620 logs.go:282] 1 containers: [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537]
	I1205 19:05:32.732560 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.735690 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:05:32.735749 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:05:32.767444 1007620 cri.go:89] found id: "c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:32.767461 1007620 cri.go:89] found id: ""
	I1205 19:05:32.767468 1007620 logs.go:282] 1 containers: [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248]
	I1205 19:05:32.767509 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.770588 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:05:32.770638 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:05:32.806516 1007620 cri.go:89] found id: "9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:32.806537 1007620 cri.go:89] found id: ""
	I1205 19:05:32.806547 1007620 logs.go:282] 1 containers: [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035]
	I1205 19:05:32.806605 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.810090 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:05:32.810168 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:05:32.844908 1007620 cri.go:89] found id: "a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:32.844929 1007620 cri.go:89] found id: ""
	I1205 19:05:32.844936 1007620 logs.go:282] 1 containers: [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121]
	I1205 19:05:32.844991 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.848282 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:05:32.848333 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:05:32.882335 1007620 cri.go:89] found id: "ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:32.882366 1007620 cri.go:89] found id: ""
	I1205 19:05:32.882376 1007620 logs.go:282] 1 containers: [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b]
	I1205 19:05:32.882427 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:32.885673 1007620 logs.go:123] Gathering logs for container status ...
	I1205 19:05:32.885700 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:05:32.925637 1007620 logs.go:123] Gathering logs for kubelet ...
	I1205 19:05:32.925668 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:05:33.008046 1007620 logs.go:123] Gathering logs for dmesg ...
	I1205 19:05:33.008086 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:05:33.022973 1007620 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:05:33.023006 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:05:33.128124 1007620 logs.go:123] Gathering logs for etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] ...
	I1205 19:05:33.128156 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:33.184782 1007620 logs.go:123] Gathering logs for coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] ...
	I1205 19:05:33.184825 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:33.221070 1007620 logs.go:123] Gathering logs for kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] ...
	I1205 19:05:33.221099 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:33.255020 1007620 logs.go:123] Gathering logs for kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] ...
	I1205 19:05:33.255047 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:33.288620 1007620 logs.go:123] Gathering logs for kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] ...
	I1205 19:05:33.288655 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:33.332443 1007620 logs.go:123] Gathering logs for kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] ...
	I1205 19:05:33.332484 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:33.374559 1007620 logs.go:123] Gathering logs for kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] ...
	I1205 19:05:33.374606 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:33.434201 1007620 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:05:33.434236 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:05:36.008406 1007620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:05:36.022681 1007620 api_server.go:72] duration metric: took 1m33.464078966s to wait for apiserver process to appear ...
	I1205 19:05:36.022716 1007620 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:05:36.022764 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:05:36.022816 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:05:36.055742 1007620 cri.go:89] found id: "5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:36.055767 1007620 cri.go:89] found id: ""
	I1205 19:05:36.055775 1007620 logs.go:282] 1 containers: [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d]
	I1205 19:05:36.055823 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.058949 1007620 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:05:36.059020 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:05:36.091533 1007620 cri.go:89] found id: "c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:36.091554 1007620 cri.go:89] found id: ""
	I1205 19:05:36.091563 1007620 logs.go:282] 1 containers: [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4]
	I1205 19:05:36.091609 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.094777 1007620 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:05:36.094841 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:05:36.127303 1007620 cri.go:89] found id: "90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:36.127327 1007620 cri.go:89] found id: ""
	I1205 19:05:36.127337 1007620 logs.go:282] 1 containers: [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537]
	I1205 19:05:36.127392 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.130430 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:05:36.130491 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:05:36.162804 1007620 cri.go:89] found id: "c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:36.162826 1007620 cri.go:89] found id: ""
	I1205 19:05:36.162834 1007620 logs.go:282] 1 containers: [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248]
	I1205 19:05:36.162888 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.166019 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:05:36.166071 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:05:36.199412 1007620 cri.go:89] found id: "9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:36.199435 1007620 cri.go:89] found id: ""
	I1205 19:05:36.199444 1007620 logs.go:282] 1 containers: [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035]
	I1205 19:05:36.199496 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.202572 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:05:36.202627 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:05:36.235120 1007620 cri.go:89] found id: "a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:36.235138 1007620 cri.go:89] found id: ""
	I1205 19:05:36.235145 1007620 logs.go:282] 1 containers: [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121]
	I1205 19:05:36.235192 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.238488 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:05:36.238534 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:05:36.269611 1007620 cri.go:89] found id: "ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:36.269631 1007620 cri.go:89] found id: ""
	I1205 19:05:36.269638 1007620 logs.go:282] 1 containers: [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b]
	I1205 19:05:36.269675 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:36.272689 1007620 logs.go:123] Gathering logs for kubelet ...
	I1205 19:05:36.272710 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:05:36.350990 1007620 logs.go:123] Gathering logs for dmesg ...
	I1205 19:05:36.351025 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:05:36.364911 1007620 logs.go:123] Gathering logs for etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] ...
	I1205 19:05:36.364943 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:36.417855 1007620 logs.go:123] Gathering logs for coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] ...
	I1205 19:05:36.417886 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:36.455229 1007620 logs.go:123] Gathering logs for kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] ...
	I1205 19:05:36.455255 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:36.493301 1007620 logs.go:123] Gathering logs for kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] ...
	I1205 19:05:36.493344 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:36.526017 1007620 logs.go:123] Gathering logs for kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] ...
	I1205 19:05:36.526045 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:36.580188 1007620 logs.go:123] Gathering logs for kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] ...
	I1205 19:05:36.580216 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:36.613958 1007620 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:05:36.613988 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:05:36.710284 1007620 logs.go:123] Gathering logs for kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] ...
	I1205 19:05:36.710313 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:36.754194 1007620 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:05:36.754225 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:05:36.831398 1007620 logs.go:123] Gathering logs for container status ...
	I1205 19:05:36.831428 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:05:39.374485 1007620 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:05:39.378078 1007620 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:05:39.378922 1007620 api_server.go:141] control plane version: v1.31.2
	I1205 19:05:39.378947 1007620 api_server.go:131] duration metric: took 3.356225004s to wait for apiserver health ...
	I1205 19:05:39.378958 1007620 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:05:39.378983 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:05:39.379029 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:05:39.413132 1007620 cri.go:89] found id: "5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:39.413154 1007620 cri.go:89] found id: ""
	I1205 19:05:39.413164 1007620 logs.go:282] 1 containers: [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d]
	I1205 19:05:39.413218 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.416438 1007620 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:05:39.416502 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:05:39.448862 1007620 cri.go:89] found id: "c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:39.448882 1007620 cri.go:89] found id: ""
	I1205 19:05:39.448891 1007620 logs.go:282] 1 containers: [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4]
	I1205 19:05:39.448944 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.452089 1007620 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:05:39.452151 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:05:39.484413 1007620 cri.go:89] found id: "90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:39.484431 1007620 cri.go:89] found id: ""
	I1205 19:05:39.484440 1007620 logs.go:282] 1 containers: [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537]
	I1205 19:05:39.484497 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.487757 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:05:39.487814 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:05:39.519715 1007620 cri.go:89] found id: "c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:39.519732 1007620 cri.go:89] found id: ""
	I1205 19:05:39.519739 1007620 logs.go:282] 1 containers: [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248]
	I1205 19:05:39.519777 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.522959 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:05:39.523017 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:05:39.555557 1007620 cri.go:89] found id: "9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:39.555576 1007620 cri.go:89] found id: ""
	I1205 19:05:39.555585 1007620 logs.go:282] 1 containers: [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035]
	I1205 19:05:39.555643 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.558787 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:05:39.558833 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:05:39.592188 1007620 cri.go:89] found id: "a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:39.592211 1007620 cri.go:89] found id: ""
	I1205 19:05:39.592222 1007620 logs.go:282] 1 containers: [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121]
	I1205 19:05:39.592268 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.595722 1007620 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:05:39.595774 1007620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:05:39.628030 1007620 cri.go:89] found id: "ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:39.628054 1007620 cri.go:89] found id: ""
	I1205 19:05:39.628064 1007620 logs.go:282] 1 containers: [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b]
	I1205 19:05:39.628115 1007620 ssh_runner.go:195] Run: which crictl
	I1205 19:05:39.631581 1007620 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:05:39.631600 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:05:39.728843 1007620 logs.go:123] Gathering logs for kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] ...
	I1205 19:05:39.728872 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248"
	I1205 19:05:39.768774 1007620 logs.go:123] Gathering logs for kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] ...
	I1205 19:05:39.768803 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b"
	I1205 19:05:39.802910 1007620 logs.go:123] Gathering logs for container status ...
	I1205 19:05:39.802935 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:05:39.843291 1007620 logs.go:123] Gathering logs for kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] ...
	I1205 19:05:39.843351 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035"
	I1205 19:05:39.876339 1007620 logs.go:123] Gathering logs for kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] ...
	I1205 19:05:39.876376 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121"
	I1205 19:05:39.928825 1007620 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:05:39.928853 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:05:40.004888 1007620 logs.go:123] Gathering logs for kubelet ...
	I1205 19:05:40.004920 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:05:40.090332 1007620 logs.go:123] Gathering logs for dmesg ...
	I1205 19:05:40.090365 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:05:40.104532 1007620 logs.go:123] Gathering logs for kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] ...
	I1205 19:05:40.104559 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d"
	I1205 19:05:40.146894 1007620 logs.go:123] Gathering logs for etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] ...
	I1205 19:05:40.146921 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4"
	I1205 19:05:40.199901 1007620 logs.go:123] Gathering logs for coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] ...
	I1205 19:05:40.199931 1007620 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"
	I1205 19:05:42.745306 1007620 system_pods.go:59] 19 kube-system pods found
	I1205 19:05:42.745351 1007620 system_pods.go:61] "amd-gpu-device-plugin-rkfpl" [84620fa8-2414-4aee-997e-77166e219e34] Running
	I1205 19:05:42.745363 1007620 system_pods.go:61] "coredns-7c65d6cfc9-7qzsp" [91584be4-8041-4112-9344-52666220752c] Running
	I1205 19:05:42.745370 1007620 system_pods.go:61] "csi-hostpath-attacher-0" [3c72118e-d60d-42f2-98ed-f4e52f0e0a81] Running
	I1205 19:05:42.745375 1007620 system_pods.go:61] "csi-hostpath-resizer-0" [64a6c0c0-a813-485f-9ef2-78f9abbe9238] Running
	I1205 19:05:42.745379 1007620 system_pods.go:61] "csi-hostpathplugin-5cqk7" [131a62a4-57eb-407f-8e80-a5c3d51538e4] Running
	I1205 19:05:42.745383 1007620 system_pods.go:61] "etcd-addons-792804" [89326821-882f-491c-b92e-b4c3600ac90d] Running
	I1205 19:05:42.745387 1007620 system_pods.go:61] "kindnet-pkvzp" [263300ed-730f-4582-b989-92eaf98b155c] Running
	I1205 19:05:42.745391 1007620 system_pods.go:61] "kube-apiserver-addons-792804" [c60d5060-a65c-4ad3-94e1-d21d0377635a] Running
	I1205 19:05:42.745398 1007620 system_pods.go:61] "kube-controller-manager-addons-792804" [a90cee14-6332-4eda-8ab1-70431bc0b27a] Running
	I1205 19:05:42.745405 1007620 system_pods.go:61] "kube-ingress-dns-minikube" [18c6b37a-2a4d-40d1-9317-1cb68ea321db] Running
	I1205 19:05:42.745412 1007620 system_pods.go:61] "kube-proxy-t8lq4" [41249f05-a6bb-4e11-a772-c813f49cce31] Running
	I1205 19:05:42.745416 1007620 system_pods.go:61] "kube-scheduler-addons-792804" [bf5949b2-510c-4c13-bd24-dfa68be6bab2] Running
	I1205 19:05:42.745419 1007620 system_pods.go:61] "metrics-server-84c5f94fbc-xvwfg" [cf42e4c4-04ee-4e87-95f2-32c2eb1a286a] Running
	I1205 19:05:42.745422 1007620 system_pods.go:61] "nvidia-device-plugin-daemonset-plx8r" [7f230e91-1177-4780-b554-91b9244f8abe] Running
	I1205 19:05:42.745428 1007620 system_pods.go:61] "registry-66c9cd494c-qh8j2" [4ed56af8-db58-447f-b533-cc510548cf01] Running
	I1205 19:05:42.745432 1007620 system_pods.go:61] "registry-proxy-5jm2x" [3066695f-7cd9-404c-b980-d75b005c5b47] Running
	I1205 19:05:42.745438 1007620 system_pods.go:61] "snapshot-controller-56fcc65765-5m8wt" [6c38ca6e-af09-4f1a-8803-1060c8ce24c7] Running
	I1205 19:05:42.745441 1007620 system_pods.go:61] "snapshot-controller-56fcc65765-xj8db" [31083923-e814-4a2d-a314-93ee4fdb3c83] Running
	I1205 19:05:42.745447 1007620 system_pods.go:61] "storage-provisioner" [d1534b1f-18b0-44a3-978d-5c2cfc6fe2df] Running
	I1205 19:05:42.745453 1007620 system_pods.go:74] duration metric: took 3.366486781s to wait for pod list to return data ...
	I1205 19:05:42.745464 1007620 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:05:42.748033 1007620 default_sa.go:45] found service account: "default"
	I1205 19:05:42.748055 1007620 default_sa.go:55] duration metric: took 2.582973ms for default service account to be created ...
	I1205 19:05:42.748065 1007620 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:05:42.758566 1007620 system_pods.go:86] 19 kube-system pods found
	I1205 19:05:42.758599 1007620 system_pods.go:89] "amd-gpu-device-plugin-rkfpl" [84620fa8-2414-4aee-997e-77166e219e34] Running
	I1205 19:05:42.758606 1007620 system_pods.go:89] "coredns-7c65d6cfc9-7qzsp" [91584be4-8041-4112-9344-52666220752c] Running
	I1205 19:05:42.758610 1007620 system_pods.go:89] "csi-hostpath-attacher-0" [3c72118e-d60d-42f2-98ed-f4e52f0e0a81] Running
	I1205 19:05:42.758614 1007620 system_pods.go:89] "csi-hostpath-resizer-0" [64a6c0c0-a813-485f-9ef2-78f9abbe9238] Running
	I1205 19:05:42.758618 1007620 system_pods.go:89] "csi-hostpathplugin-5cqk7" [131a62a4-57eb-407f-8e80-a5c3d51538e4] Running
	I1205 19:05:42.758622 1007620 system_pods.go:89] "etcd-addons-792804" [89326821-882f-491c-b92e-b4c3600ac90d] Running
	I1205 19:05:42.758626 1007620 system_pods.go:89] "kindnet-pkvzp" [263300ed-730f-4582-b989-92eaf98b155c] Running
	I1205 19:05:42.758630 1007620 system_pods.go:89] "kube-apiserver-addons-792804" [c60d5060-a65c-4ad3-94e1-d21d0377635a] Running
	I1205 19:05:42.758633 1007620 system_pods.go:89] "kube-controller-manager-addons-792804" [a90cee14-6332-4eda-8ab1-70431bc0b27a] Running
	I1205 19:05:42.758639 1007620 system_pods.go:89] "kube-ingress-dns-minikube" [18c6b37a-2a4d-40d1-9317-1cb68ea321db] Running
	I1205 19:05:42.758645 1007620 system_pods.go:89] "kube-proxy-t8lq4" [41249f05-a6bb-4e11-a772-c813f49cce31] Running
	I1205 19:05:42.758652 1007620 system_pods.go:89] "kube-scheduler-addons-792804" [bf5949b2-510c-4c13-bd24-dfa68be6bab2] Running
	I1205 19:05:42.758657 1007620 system_pods.go:89] "metrics-server-84c5f94fbc-xvwfg" [cf42e4c4-04ee-4e87-95f2-32c2eb1a286a] Running
	I1205 19:05:42.758664 1007620 system_pods.go:89] "nvidia-device-plugin-daemonset-plx8r" [7f230e91-1177-4780-b554-91b9244f8abe] Running
	I1205 19:05:42.758674 1007620 system_pods.go:89] "registry-66c9cd494c-qh8j2" [4ed56af8-db58-447f-b533-cc510548cf01] Running
	I1205 19:05:42.758680 1007620 system_pods.go:89] "registry-proxy-5jm2x" [3066695f-7cd9-404c-b980-d75b005c5b47] Running
	I1205 19:05:42.758690 1007620 system_pods.go:89] "snapshot-controller-56fcc65765-5m8wt" [6c38ca6e-af09-4f1a-8803-1060c8ce24c7] Running
	I1205 19:05:42.758700 1007620 system_pods.go:89] "snapshot-controller-56fcc65765-xj8db" [31083923-e814-4a2d-a314-93ee4fdb3c83] Running
	I1205 19:05:42.758710 1007620 system_pods.go:89] "storage-provisioner" [d1534b1f-18b0-44a3-978d-5c2cfc6fe2df] Running
	I1205 19:05:42.758722 1007620 system_pods.go:126] duration metric: took 10.651879ms to wait for k8s-apps to be running ...
	I1205 19:05:42.758733 1007620 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:05:42.758784 1007620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:05:42.770091 1007620 system_svc.go:56] duration metric: took 11.348262ms WaitForService to wait for kubelet
	I1205 19:05:42.770121 1007620 kubeadm.go:582] duration metric: took 1m40.211525134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:05:42.770148 1007620 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:05:42.773225 1007620 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:05:42.773250 1007620 node_conditions.go:123] node cpu capacity is 8
	I1205 19:05:42.773264 1007620 node_conditions.go:105] duration metric: took 3.110045ms to run NodePressure ...
	I1205 19:05:42.773275 1007620 start.go:241] waiting for startup goroutines ...
	I1205 19:05:42.773282 1007620 start.go:246] waiting for cluster config update ...
	I1205 19:05:42.773298 1007620 start.go:255] writing updated cluster config ...
	I1205 19:05:42.773558 1007620 ssh_runner.go:195] Run: rm -f paused
	I1205 19:05:42.824943 1007620 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:05:42.827827 1007620 out.go:177] * Done! kubectl is now configured to use "addons-792804" cluster and "default" namespace by default
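For reference, the apiserver health probe and the kube-system pod wait recorded above can be re-run by hand. A minimal sketch, assuming kubectl still points at the addons-792804 context and using the node address 192.168.49.2:8443 from the log:

    # Probe the same /healthz endpoint the wait loop polled; the log shows it returning "ok".
    # -k skips TLS verification; alternatively pass --cacert with the cluster CA.
    curl -k https://192.168.49.2:8443/healthz

    # List the kube-system pods the wait loop checked for Running status.
    kubectl --context addons-792804 get pods -n kube-system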
	
	
	==> CRI-O <==
	Dec 05 19:08:43 addons-792804 crio[1029]: time="2024-12-05 19:08:43.747237283Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-tvww9 Namespace:ingress-nginx ID:528eb18553023515bf75109f924620c74a3ec5b3dd6c1ff696757684f8ee6bbe UID:50795bc5-b003-4a49-8e43-5d357178f678 NetNS:/var/run/netns/0ea88071-e947-4925-93af-36e6a67bd6a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 19:08:43 addons-792804 crio[1029]: time="2024-12-05 19:08:43.747352740Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-tvww9 from CNI network \"kindnet\" (type=ptp)"
	Dec 05 19:08:43 addons-792804 crio[1029]: time="2024-12-05 19:08:43.787378254Z" level=info msg="Stopped pod sandbox: 528eb18553023515bf75109f924620c74a3ec5b3dd6c1ff696757684f8ee6bbe" id=7912a58a-07dc-41db-bb23-f5952de44adb name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:43 addons-792804 crio[1029]: time="2024-12-05 19:08:43.825572976Z" level=info msg="Removing container: c0fa3985b0c7b99fa680fde97244dceaf3916c594f0a7b70dca9f7ae3656f273" id=0380054d-4f4a-45a6-8879-d827a0d043f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:08:43 addons-792804 crio[1029]: time="2024-12-05 19:08:43.838791203Z" level=info msg="Removed container c0fa3985b0c7b99fa680fde97244dceaf3916c594f0a7b70dca9f7ae3656f273: ingress-nginx/ingress-nginx-controller-5f85ff4588-tvww9/controller" id=0380054d-4f4a-45a6-8879-d827a0d043f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.911545509Z" level=info msg="Removing container: b112d9ec5f02734004c3f68647c4e1a793212107848e177600a7de500a882989" id=7e60441e-aae6-491a-8109-97af470e863c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.923514668Z" level=info msg="Removed container b112d9ec5f02734004c3f68647c4e1a793212107848e177600a7de500a882989: ingress-nginx/ingress-nginx-admission-patch-dg96z/patch" id=7e60441e-aae6-491a-8109-97af470e863c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.924564760Z" level=info msg="Removing container: bc7a0617a2988062716f66ca055ff066cc75dd2ffeaa308d64e706f39e37acbc" id=8fd1cc3f-8a60-44cc-b959-3bc448fb1e53 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.938703804Z" level=info msg="Removed container bc7a0617a2988062716f66ca055ff066cc75dd2ffeaa308d64e706f39e37acbc: ingress-nginx/ingress-nginx-admission-create-lmzlv/create" id=8fd1cc3f-8a60-44cc-b959-3bc448fb1e53 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.939839635Z" level=info msg="Stopping pod sandbox: 528eb18553023515bf75109f924620c74a3ec5b3dd6c1ff696757684f8ee6bbe" id=66a2c2f1-b52c-4781-b5f7-b84cc730549e name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.939871087Z" level=info msg="Stopped pod sandbox (already stopped): 528eb18553023515bf75109f924620c74a3ec5b3dd6c1ff696757684f8ee6bbe" id=66a2c2f1-b52c-4781-b5f7-b84cc730549e name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.940086271Z" level=info msg="Removing pod sandbox: 528eb18553023515bf75109f924620c74a3ec5b3dd6c1ff696757684f8ee6bbe" id=928b4a9b-cbd6-4b45-adad-70bfcb835b49 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.945981661Z" level=info msg="Removed pod sandbox: 528eb18553023515bf75109f924620c74a3ec5b3dd6c1ff696757684f8ee6bbe" id=928b4a9b-cbd6-4b45-adad-70bfcb835b49 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.946312491Z" level=info msg="Stopping pod sandbox: f4d193f9f67ebe6c981baf41dd2b948e44bb2f3902873a12a8050b227636a0c0" id=12dde39e-4076-4fb2-8388-381d44b4c0c8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.946343699Z" level=info msg="Stopped pod sandbox (already stopped): f4d193f9f67ebe6c981baf41dd2b948e44bb2f3902873a12a8050b227636a0c0" id=12dde39e-4076-4fb2-8388-381d44b4c0c8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.946617154Z" level=info msg="Removing pod sandbox: f4d193f9f67ebe6c981baf41dd2b948e44bb2f3902873a12a8050b227636a0c0" id=3a4238e9-abb7-462c-8bba-d25c1ad9c2ad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.952343236Z" level=info msg="Removed pod sandbox: f4d193f9f67ebe6c981baf41dd2b948e44bb2f3902873a12a8050b227636a0c0" id=3a4238e9-abb7-462c-8bba-d25c1ad9c2ad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.952649459Z" level=info msg="Stopping pod sandbox: 110e16e24277516e2e26050af330d66ca4caf0fcdd91fc2a7bcc838b345f880e" id=957cb2d7-acef-499c-92c2-40c03686ddff name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.952673205Z" level=info msg="Stopped pod sandbox (already stopped): 110e16e24277516e2e26050af330d66ca4caf0fcdd91fc2a7bcc838b345f880e" id=957cb2d7-acef-499c-92c2-40c03686ddff name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.952918207Z" level=info msg="Removing pod sandbox: 110e16e24277516e2e26050af330d66ca4caf0fcdd91fc2a7bcc838b345f880e" id=58ff170d-5d57-4abe-96e7-e9a81aca8ae9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.957857166Z" level=info msg="Removed pod sandbox: 110e16e24277516e2e26050af330d66ca4caf0fcdd91fc2a7bcc838b345f880e" id=58ff170d-5d57-4abe-96e7-e9a81aca8ae9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.958186865Z" level=info msg="Stopping pod sandbox: 7f024a9e3d53f8d5c571a8ea895646ec76e472132f24a36d8fd0cc626656928f" id=2a8c3e37-dd15-41ad-8b43-ac097293430a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.958210151Z" level=info msg="Stopped pod sandbox (already stopped): 7f024a9e3d53f8d5c571a8ea895646ec76e472132f24a36d8fd0cc626656928f" id=2a8c3e37-dd15-41ad-8b43-ac097293430a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.958445504Z" level=info msg="Removing pod sandbox: 7f024a9e3d53f8d5c571a8ea895646ec76e472132f24a36d8fd0cc626656928f" id=a732dfa2-e4f2-4f1e-8038-f866a274e576 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 19:08:57 addons-792804 crio[1029]: time="2024-12-05 19:08:57.963991060Z" level=info msg="Removed pod sandbox: 7f024a9e3d53f8d5c571a8ea895646ec76e472132f24a36d8fd0cc626656928f" id=a732dfa2-e4f2-4f1e-8038-f866a274e576 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8f07e1e9dd57       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   ad45f6dec1cad       hello-world-app-55bf9c44b4-2f5lt
	3c1a849a5c9c6       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   6f556d546ee83       nginx
	b9b49e0899283       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   3fba920129fb4       busybox
	1a7ee3d7b63fb       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   e8a32a365df11       metrics-server-84c5f94fbc-xvwfg
	90f4d4feb8054       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   28431d5530807       coredns-7c65d6cfc9-7qzsp
	df26d92f96f4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   3eb846c3498b1       storage-provisioner
	ba1ab1cc72f73       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                      7 minutes ago       Running             kindnet-cni               0                   13c0fdc786cf7       kindnet-pkvzp
	9d82c3212e55e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   cf8473976cf44       kube-proxy-t8lq4
	c8f95dacee1a1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   2a839613dd0a0       kube-scheduler-addons-792804
	a29ac131c53e9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   d33e4d58e65ba       kube-controller-manager-addons-792804
	5bc338ce05c4d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   9077d86af1dd8       kube-apiserver-addons-792804
	c239002d50bbb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   863973ad960f7       etcd-addons-792804
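The table above is the output of the container status collection step (sudo crictl ps -a) shown in the log. A minimal sketch of pulling the same view by hand through the minikube node shell, assuming the addons-792804 profile is still running:

    # List every container CRI-O knows about on the node.
    minikube -p addons-792804 ssh "sudo crictl ps -a"

    # Tail a single container's log by ID, as the log-gathering steps above do
    # (this is the coredns container ID recorded earlier in the log).
    minikube -p addons-792804 ssh "sudo crictl logs --tail 400 90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537"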
	
	
	==> coredns [90f4d4feb8054f3efc8226194ca5640abce11524c6b1c11b44d3705c2228e537] <==
	[INFO] 10.244.0.22:57093 - 34407 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005393057s
	[INFO] 10.244.0.22:34435 - 36773 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005546455s
	[INFO] 10.244.0.22:42197 - 51771 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004789254s
	[INFO] 10.244.0.22:56371 - 62225 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004791791s
	[INFO] 10.244.0.22:54963 - 20303 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004870093s
	[INFO] 10.244.0.22:41695 - 55106 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005025682s
	[INFO] 10.244.0.22:38684 - 48186 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005025852s
	[INFO] 10.244.0.22:50904 - 37074 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005508115s
	[INFO] 10.244.0.22:57093 - 11046 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005412995s
	[INFO] 10.244.0.22:54963 - 48289 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001303202s
	[INFO] 10.244.0.22:56371 - 11950 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001549322s
	[INFO] 10.244.0.22:42197 - 374 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001403059s
	[INFO] 10.244.0.22:34435 - 41504 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005655262s
	[INFO] 10.244.0.22:50904 - 14019 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068668s
	[INFO] 10.244.0.22:42197 - 60102 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000097557s
	[INFO] 10.244.0.22:54963 - 44603 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000134897s
	[INFO] 10.244.0.22:34435 - 7368 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076307s
	[INFO] 10.244.0.22:56371 - 45277 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065663s
	[INFO] 10.244.0.22:57093 - 37102 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000188718s
	[INFO] 10.244.0.22:41695 - 19629 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.012748737s
	[INFO] 10.244.0.22:38684 - 48115 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.012849794s
	[INFO] 10.244.0.22:41695 - 23336 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005000771s
	[INFO] 10.244.0.22:38684 - 5830 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00488903s
	[INFO] 10.244.0.22:41695 - 54534 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070881s
	[INFO] 10.244.0.22:38684 - 3517 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000086725s
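In the coredns log above, the NXDOMAIN lines are search-domain expansions (google.internal, c.k8s-minikube.internal) tried before the bare in-cluster name, and the NOERROR lines show hello-world-app.default.svc.cluster.local resolving correctly. A minimal sketch of re-checking resolution from inside the cluster, assuming the busybox pod listed earlier ships the nslookup applet:

    # Query cluster DNS from a pod already running in the default namespace.
    kubectl --context addons-792804 exec busybox -- nslookup hello-world-app.default.svc.cluster.local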
	
	
	==> describe nodes <==
	Name:               addons-792804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-792804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=addons-792804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_03_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-792804
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:03:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-792804
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:11:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:09:04 +0000   Thu, 05 Dec 2024 19:03:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:09:04 +0000   Thu, 05 Dec 2024 19:03:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:09:04 +0000   Thu, 05 Dec 2024 19:03:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:09:04 +0000   Thu, 05 Dec 2024 19:04:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-792804
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 34807c83538b4fc29d7805a2fe8108b6
	  System UUID:                e34a953d-5e76-44c0-90b5-820e367e3919
	  Boot ID:                    63e29e64-0755-4812-a891-d8a092e25c6a
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     hello-world-app-55bf9c44b4-2f5lt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 coredns-7c65d6cfc9-7qzsp                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m48s
	  kube-system                 etcd-addons-792804                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m54s
	  kube-system                 kindnet-pkvzp                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m48s
	  kube-system                 kube-apiserver-addons-792804             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-controller-manager-addons-792804    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-proxy-t8lq4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-scheduler-addons-792804             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 metrics-server-84c5f94fbc-xvwfg          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m44s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m43s  kube-proxy       
	  Normal   Starting                 7m54s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m54s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m54s  kubelet          Node addons-792804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m54s  kubelet          Node addons-792804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m54s  kubelet          Node addons-792804 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m50s  node-controller  Node addons-792804 event: Registered Node addons-792804 in Controller
	  Normal   NodeReady                7m30s  kubelet          Node addons-792804 status is now: NodeReady
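The node description above was captured by the in-cluster kubectl binary during the "describe nodes" gathering steps. The same view can be regenerated from the host; a minimal sketch, assuming the addons-792804 context:

    # Full description, equivalent to the logged "kubectl describe nodes" call.
    kubectl --context addons-792804 describe node addons-792804

    # Condensed view of status, roles and versions.
    kubectl --context addons-792804 get node addons-792804 -o wide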
	
	
	==> dmesg <==
	[Dec 5 19:06] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +1.011858] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +2.015843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +4.127715] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000049] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[  +8.191308] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[ +16.126709] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
	[Dec 5 19:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 82 8e ea 9e b9 ac 92 f1 e2 04 03 8f 08 00
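The repeated martian-source entries above report packets with source 127.0.0.1 and destination 10.244.0.22 arriving on eth0; by default the kernel treats loopback-sourced traffic on a non-loopback interface as martian unless route_localnet is enabled. A minimal sketch of inspecting the relevant sysctls on the node (where to look is an assumption, not a conclusion from this report):

    # Check loopback routing and reverse-path filtering settings on the node.
    minikube -p addons-792804 ssh "sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.rp_filter"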
	
	
	==> etcd [c239002d50bbbe1a3ee04eab8ab171a700759ed47d400ea4d327f004e9dbe6c4] <==
	{"level":"info","ts":"2024-12-05T19:04:06.185679Z","caller":"traceutil/trace.go:171","msg":"trace[293223732] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"101.698691ms","start":"2024-12-05T19:04:06.083961Z","end":"2024-12-05T19:04:06.185660Z","steps":["trace[293223732] 'process raft request'  (duration: 100.412025ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.192472Z","caller":"traceutil/trace.go:171","msg":"trace[603909977] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"101.36148ms","start":"2024-12-05T19:04:06.091093Z","end":"2024-12-05T19:04:06.192455Z","steps":["trace[603909977] 'process raft request'  (duration: 93.358736ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.192765Z","caller":"traceutil/trace.go:171","msg":"trace[1484912059] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"100.783725ms","start":"2024-12-05T19:04:06.091966Z","end":"2024-12-05T19:04:06.192750Z","steps":["trace[1484912059] 'process raft request'  (duration: 100.078565ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.286496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.435733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-05T19:04:06.286579Z","caller":"traceutil/trace.go:171","msg":"trace[629605927] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:386; }","duration":"103.528821ms","start":"2024-12-05T19:04:06.183037Z","end":"2024-12-05T19:04:06.286566Z","steps":["trace[629605927] 'agreement among raft nodes before linearized reading'  (duration: 103.369868ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.397426Z","caller":"traceutil/trace.go:171","msg":"trace[496531437] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"101.697512ms","start":"2024-12-05T19:04:06.295697Z","end":"2024-12-05T19:04:06.397395Z","steps":["trace[496531437] 'process raft request'  (duration: 98.425183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.796908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.467696ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033709431014813 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kube-system/registry\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kube-system/registry\" value_size:1479 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T19:04:06.797059Z","caller":"traceutil/trace.go:171","msg":"trace[1973021970] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"108.852591ms","start":"2024-12-05T19:04:06.688185Z","end":"2024-12-05T19:04:06.797037Z","steps":["trace[1973021970] 'process raft request'  (duration: 108.790074ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797238Z","caller":"traceutil/trace.go:171","msg":"trace[270069390] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"112.57486ms","start":"2024-12-05T19:04:06.684653Z","end":"2024-12-05T19:04:06.797228Z","steps":["trace[270069390] 'compare'  (duration: 101.383089ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797450Z","caller":"traceutil/trace.go:171","msg":"trace[1358575336] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"103.578883ms","start":"2024-12-05T19:04:06.693864Z","end":"2024-12-05T19:04:06.797442Z","steps":["trace[1358575336] 'process raft request'  (duration: 103.55187ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797535Z","caller":"traceutil/trace.go:171","msg":"trace[1670737765] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"104.064006ms","start":"2024-12-05T19:04:06.693466Z","end":"2024-12-05T19:04:06.797530Z","steps":["trace[1670737765] 'process raft request'  (duration: 103.544568ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797550Z","caller":"traceutil/trace.go:171","msg":"trace[124355398] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"103.877045ms","start":"2024-12-05T19:04:06.693670Z","end":"2024-12-05T19:04:06.797547Z","steps":["trace[124355398] 'process raft request'  (duration: 103.708387ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.797721Z","caller":"traceutil/trace.go:171","msg":"trace[364603730] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:432; }","duration":"112.86766ms","start":"2024-12-05T19:04:06.684847Z","end":"2024-12-05T19:04:06.797715Z","steps":["trace[364603730] 'read index received'  (duration: 10.553926ms)","trace[364603730] 'applied index is now lower than readState.Index'  (duration: 102.312073ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T19:04:06.797764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.908129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:06.798215Z","caller":"traceutil/trace.go:171","msg":"trace[1360580632] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:423; }","duration":"113.361144ms","start":"2024-12-05T19:04:06.684844Z","end":"2024-12-05T19:04:06.798205Z","steps":["trace[1360580632] 'agreement among raft nodes before linearized reading'  (duration: 112.888194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.876019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.138123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-05T19:04:06.876546Z","caller":"traceutil/trace.go:171","msg":"trace[671052996] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:428; }","duration":"181.666486ms","start":"2024-12-05T19:04:06.694861Z","end":"2024-12-05T19:04:06.876527Z","steps":["trace[671052996] 'agreement among raft nodes before linearized reading'  (duration: 181.076829ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.877176Z","caller":"traceutil/trace.go:171","msg":"trace[257066956] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"181.313478ms","start":"2024-12-05T19:04:06.695849Z","end":"2024-12-05T19:04:06.877162Z","steps":["trace[257066956] 'process raft request'  (duration: 179.856799ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.878126Z","caller":"traceutil/trace.go:171","msg":"trace[551061546] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"103.660851ms","start":"2024-12-05T19:04:06.774450Z","end":"2024-12-05T19:04:06.878111Z","steps":["trace[551061546] 'process raft request'  (duration: 101.322901ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.878392Z","caller":"traceutil/trace.go:171","msg":"trace[10943379] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"103.60684ms","start":"2024-12-05T19:04:06.774774Z","end":"2024-12-05T19:04:06.878381Z","steps":["trace[10943379] 'process raft request'  (duration: 101.034364ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:04:06.878600Z","caller":"traceutil/trace.go:171","msg":"trace[887548851] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"103.519342ms","start":"2024-12-05T19:04:06.775070Z","end":"2024-12-05T19:04:06.878589Z","steps":["trace[887548851] 'process raft request'  (duration: 100.778841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.878690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.982851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/cloud-spanner-emulator-dc5db94f4\" ","response":"range_response_count:1 size:2184"}
	{"level":"info","ts":"2024-12-05T19:04:06.880930Z","caller":"traceutil/trace.go:171","msg":"trace[30945718] range","detail":"{range_begin:/registry/replicasets/default/cloud-spanner-emulator-dc5db94f4; range_end:; response_count:1; response_revision:428; }","duration":"106.225649ms","start":"2024-12-05T19:04:06.774689Z","end":"2024-12-05T19:04:06.880915Z","steps":["trace[30945718] 'agreement among raft nodes before linearized reading'  (duration: 103.910227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:06.879762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.77615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-t8lq4\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-12-05T19:04:06.881589Z","caller":"traceutil/trace.go:171","msg":"trace[117254968] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-t8lq4; range_end:; response_count:1; response_revision:428; }","duration":"106.604162ms","start":"2024-12-05T19:04:06.774971Z","end":"2024-12-05T19:04:06.881576Z","steps":["trace[117254968] 'agreement among raft nodes before linearized reading'  (duration: 104.746662ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:11:51 up 21:54,  0 users,  load average: 0.12, 12.35, 44.11
	Linux addons-792804 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ba1ab1cc72f736655e2e66e6c9b76b66f248f6685222ce2da9fd585f16ec3d8b] <==
	I1205 19:09:41.575674       1 main.go:301] handling current node
	I1205 19:09:51.582221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:09:51.582276       1 main.go:301] handling current node
	I1205 19:10:01.578071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:10:01.578112       1 main.go:301] handling current node
	I1205 19:10:11.575641       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:10:11.575677       1 main.go:301] handling current node
	I1205 19:10:21.576935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:10:21.576974       1 main.go:301] handling current node
	I1205 19:10:31.582072       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:10:31.582108       1 main.go:301] handling current node
	I1205 19:10:41.575346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:10:41.575389       1 main.go:301] handling current node
	I1205 19:10:51.576079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:10:51.576124       1 main.go:301] handling current node
	I1205 19:11:01.576887       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:11:01.576925       1 main.go:301] handling current node
	I1205 19:11:11.575660       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:11:11.575699       1 main.go:301] handling current node
	I1205 19:11:21.576868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:11:21.576962       1 main.go:301] handling current node
	I1205 19:11:31.576856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:11:31.576927       1 main.go:301] handling current node
	I1205 19:11:41.582067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:11:41.582105       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5bc338ce05c4da7e7fa76b11d8c90493b0c07ddacd33cd5a43e2080d6c77089d] <==
	 > logger="UnhandledError"
	I1205 19:05:37.220792       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 19:05:52.511485       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32784: use of closed network connection
	E1205 19:05:52.690273       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32816: use of closed network connection
	I1205 19:06:01.610691       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.167.102"}
	I1205 19:06:11.854864       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 19:06:12.882247       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 19:06:16.365839       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 19:06:16.535333       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.92.187"}
	I1205 19:06:39.419512       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1205 19:06:42.595738       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:06:57.261957       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.262048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.274449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.274584       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.275537       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.275650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.289043       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.289090       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:06:57.389873       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:06:57.390029       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:06:58.275601       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:06:58.390720       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:06:58.401326       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:08:36.903046       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.203.174"}
	
	
	==> kube-controller-manager [a29ac131c53e9611d80141f1a3ce11b1aea27dee880641d77591d3c6d1ba5121] <==
	E1205 19:09:36.775499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:09:42.585870       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:09:42.585918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:09:55.126909       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:09:55.126953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:09:59.483616       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:09:59.483669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:18.288317       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:18.288366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:21.566417       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:21.566463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:30.649278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:30.649325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:44.358840       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:44.358894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:53.772732       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:53.772778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:17.295141       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:17.295187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:18.366776       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:18.366822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:27.079165       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:27.079224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:27.651880       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:27.651925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [9d82c3212e55e3929ee23c0d1edc36475a19ce36a5e66d3db62793e31ea45035] <==
	I1205 19:04:06.799742       1 server_linux.go:66] "Using iptables proxy"
	I1205 19:04:07.377637       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 19:04:07.377724       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:04:07.984833       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 19:04:07.984995       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:04:07.993033       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:04:07.993477       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:04:07.993550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:04:07.995149       1 config.go:199] "Starting service config controller"
	I1205 19:04:07.995178       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:04:07.995218       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:04:07.995226       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:04:08.076711       1 config.go:328] "Starting node config controller"
	I1205 19:04:08.077398       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:04:08.099654       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:04:08.099764       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:04:08.178333       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c8f95dacee1a168b278b129a9eb984d048f980d893265abc98d694fb87903248] <==
	W1205 19:03:55.099983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:03:55.100002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.174269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:55.174314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.175908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:03:55.175994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.176238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:03:55.176314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.176408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:03:55.176442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.176410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:55.176477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.908182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:03:55.908229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.938598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:55.938633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:55.979992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:03:55.980026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:56.104980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:56.105018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:56.136297       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:03:56.136341       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 19:03:56.183653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:56.183695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 19:03:58.193231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 19:09:57 addons-792804 kubelet[1637]: E1205 19:09:57.657172    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425797656925071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:07 addons-792804 kubelet[1637]: E1205 19:10:07.659222    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425807659059015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:07 addons-792804 kubelet[1637]: E1205 19:10:07.659256    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425807659059015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:17 addons-792804 kubelet[1637]: E1205 19:10:17.660992    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425817660788939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:17 addons-792804 kubelet[1637]: E1205 19:10:17.661034    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425817660788939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:27 addons-792804 kubelet[1637]: E1205 19:10:27.663335    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425827663152172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:27 addons-792804 kubelet[1637]: E1205 19:10:27.663380    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425827663152172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:37 addons-792804 kubelet[1637]: I1205 19:10:37.483615    1637 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 19:10:37 addons-792804 kubelet[1637]: E1205 19:10:37.665198    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425837665035431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:37 addons-792804 kubelet[1637]: E1205 19:10:37.665229    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425837665035431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:47 addons-792804 kubelet[1637]: E1205 19:10:47.667270    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425847667074762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:47 addons-792804 kubelet[1637]: E1205 19:10:47.667304    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425847667074762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:57 addons-792804 kubelet[1637]: E1205 19:10:57.670203    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425857669964886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:57 addons-792804 kubelet[1637]: E1205 19:10:57.670236    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425857669964886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:07 addons-792804 kubelet[1637]: E1205 19:11:07.672574    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425867672402194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:07 addons-792804 kubelet[1637]: E1205 19:11:07.672608    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425867672402194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:17 addons-792804 kubelet[1637]: E1205 19:11:17.674507    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425877674330241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:17 addons-792804 kubelet[1637]: E1205 19:11:17.674545    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425877674330241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:27 addons-792804 kubelet[1637]: E1205 19:11:27.676741    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425887676574417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:27 addons-792804 kubelet[1637]: E1205 19:11:27.676774    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425887676574417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:37 addons-792804 kubelet[1637]: E1205 19:11:37.680942    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425897679782603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:37 addons-792804 kubelet[1637]: E1205 19:11:37.680988    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425897679782603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:44 addons-792804 kubelet[1637]: I1205 19:11:44.483758    1637 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 19:11:47 addons-792804 kubelet[1637]: E1205 19:11:47.683795    1637 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425907683548751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:47 addons-792804 kubelet[1637]: E1205 19:11:47.683838    1637 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425907683548751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626948,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [df26d92f96f4c720f762cb6554d0e782091843c51977a22bf90820c2cd4cef04] <==
	I1205 19:04:22.699229       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:04:22.708323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:04:22.708436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:04:22.721942       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:04:22.722587       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-792804_39a82853-bfdf-4929-b403-b15ceaa0319b!
	I1205 19:04:22.722709       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b0e8ed1c-7e3e-4708-a1c4-e02553bc7cd5", APIVersion:"v1", ResourceVersion:"892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-792804_39a82853-bfdf-4929-b403-b15ceaa0319b became leader
	I1205 19:04:22.824646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-792804_39a82853-bfdf-4929-b403-b15ceaa0319b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-792804 -n addons-792804
helpers_test.go:261: (dbg) Run:  kubectl --context addons-792804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (351.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (125.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-392363 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-392363 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.63237415s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:591: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-392363       NotReady   control-plane   8m54s   v1.31.2
	ha-392363-m02   Ready      control-plane   8m36s   v1.31.2
	ha-392363-m04   Ready      <none>          7m21s   v1.31.2

                                                
                                                
-- /stdout --
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
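For comparison with the go-template check above, the same Ready-condition lookup can be expressed with client-go. This is a minimal sketch, not part of the minikube test harness; it assumes KUBECONFIG points at the profile's kubeconfig, and the program layout and output format are illustrative only.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG points at the profile's kubeconfig
	// (the test harness resolves its kubeconfig differently).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Print each node's Ready condition status; "Unknown" (as reported above
	// for ha-392363) means the kubelet has stopped posting node status.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s\t%s\n", n.Name, c.Status)
			}
		}
	}
}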
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-392363
helpers_test.go:235: (dbg) docker inspect ha-392363:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47",
	        "Created": "2024-12-05T19:15:34.346998861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1090007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-05T19:22:40.264185383Z",
	            "FinishedAt": "2024-12-05T19:22:39.56722046Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47/hosts",
	        "LogPath": "/var/lib/docker/containers/3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47/3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47-json.log",
	        "Name": "/ha-392363",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-392363:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-392363",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f641dcd63fe390f1f5f8a0ab5d3754454538d16da8ee95bb202f8d3a940035-init/diff:/var/lib/docker/overlay2/eeb994da5272b5c43f59ac5fc7f49f2b48f722f8f3da0a9c9746c4ff0b32901d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f641dcd63fe390f1f5f8a0ab5d3754454538d16da8ee95bb202f8d3a940035/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f641dcd63fe390f1f5f8a0ab5d3754454538d16da8ee95bb202f8d3a940035/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f641dcd63fe390f1f5f8a0ab5d3754454538d16da8ee95bb202f8d3a940035/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-392363",
	                "Source": "/var/lib/docker/volumes/ha-392363/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-392363",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-392363",
	                "name.minikube.sigs.k8s.io": "ha-392363",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e17870d9fea1098027df4b0c691af2add7e65e8f1c547fa26a284119c8e91b15",
	            "SandboxKey": "/var/run/docker/netns/e17870d9fea1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-392363": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9251d5f0ef750c1e841816fb9577edebe114f268a3f02464f2c8aadad6ca9fa9",
	                    "EndpointID": "68828ebee8eaf39cde9bda93b91800685aaa7f7b9c4d82f74b42d7f1d66e90e8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-392363",
	                        "3f7f53f006e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-392363 -n ha-392363
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 logs -n 25
E1205 19:24:42.988138 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-392363 logs -n 25: (1.558374852s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-392363 cp ha-392363-m03:/home/docker/cp-test.txt                             | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m04:/home/docker/cp-test_ha-392363-m03_ha-392363-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n                                                                | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n ha-392363-m04 sudo cat                                         | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | /home/docker/cp-test_ha-392363-m03_ha-392363-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-392363 cp testdata/cp-test.txt                                               | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n                                                                | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt                             | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile343353057/001/cp-test_ha-392363-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n                                                                | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt                             | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363:/home/docker/cp-test_ha-392363-m04_ha-392363.txt                      |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n                                                                | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n ha-392363 sudo cat                                             | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | /home/docker/cp-test_ha-392363-m04_ha-392363.txt                                |           |         |         |                     |                     |
	| cp      | ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt                             | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m02:/home/docker/cp-test_ha-392363-m04_ha-392363-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n                                                                | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n ha-392363-m02 sudo cat                                         | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | /home/docker/cp-test_ha-392363-m04_ha-392363-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt                             | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m03:/home/docker/cp-test_ha-392363-m04_ha-392363-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n                                                                | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | ha-392363-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-392363 ssh -n ha-392363-m03 sudo cat                                         | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:17 UTC |
	|         | /home/docker/cp-test_ha-392363-m04_ha-392363-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-392363 node stop m02 -v=7                                                    | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:17 UTC | 05 Dec 24 19:18 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-392363 node start m02 -v=7                                                   | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:18 UTC | 05 Dec 24 19:18 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-392363 -v=7                                                          | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-392363 -v=7                                                               | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:18 UTC | 05 Dec 24 19:19 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-392363 --wait=true -v=7                                                   | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:19 UTC | 05 Dec 24 19:21 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-392363                                                               | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:21 UTC |                     |
	| node    | ha-392363 node delete m03 -v=7                                                  | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:21 UTC | 05 Dec 24 19:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-392363 stop -v=7                                                             | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:22 UTC | 05 Dec 24 19:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-392363 --wait=true                                                        | ha-392363 | jenkins | v1.34.0 | 05 Dec 24 19:22 UTC | 05 Dec 24 19:24 UTC |
	|         | -v=7 --alsologtostderr                                                          |           |         |         |                     |                     |
	|         | --driver=docker                                                                 |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                        |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:22:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:22:39.995459 1089722 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:22:39.995605 1089722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:22:39.995614 1089722 out.go:358] Setting ErrFile to fd 2...
	I1205 19:22:39.995619 1089722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:22:39.995780 1089722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:22:39.996268 1089722 out.go:352] Setting JSON to false
	I1205 19:22:39.997176 1089722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":79511,"bootTime":1733347049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:22:39.997283 1089722 start.go:139] virtualization: kvm guest
	I1205 19:22:39.999508 1089722 out.go:177] * [ha-392363] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:22:40.000685 1089722 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:22:40.000705 1089722 notify.go:220] Checking for updates...
	I1205 19:22:40.002848 1089722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:22:40.003836 1089722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:22:40.004783 1089722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:22:40.005779 1089722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:22:40.006722 1089722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:22:40.008085 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:40.008533 1089722 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:22:40.030303 1089722 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:22:40.030372 1089722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:22:40.076794 1089722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:42 SystemTime:2024-12-05 19:22:40.068264317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:22:40.076921 1089722 docker.go:318] overlay module found
	I1205 19:22:40.078637 1089722 out.go:177] * Using the docker driver based on existing profile
	I1205 19:22:40.079851 1089722 start.go:297] selected driver: docker
	I1205 19:22:40.079868 1089722 start.go:901] validating driver "docker" against &{Name:ha-392363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-392363 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:22:40.079982 1089722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:22:40.080050 1089722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:22:40.123848 1089722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:42 SystemTime:2024-12-05 19:22:40.115549457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:22:40.124587 1089722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:22:40.124620 1089722 cni.go:84] Creating CNI manager for ""
	I1205 19:22:40.124681 1089722 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:22:40.124739 1089722 start.go:340] cluster config:
	{Name:ha-392363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-392363 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvi
dia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I1205 19:22:40.127437 1089722 out.go:177] * Starting "ha-392363" primary control-plane node in "ha-392363" cluster
	I1205 19:22:40.128606 1089722 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:22:40.129674 1089722 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 19:22:40.130676 1089722 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:22:40.130703 1089722 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 19:22:40.130713 1089722 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:22:40.130722 1089722 cache.go:56] Caching tarball of preloaded images
	I1205 19:22:40.130830 1089722 preload.go:172] Found /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:22:40.130843 1089722 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:22:40.130956 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:22:40.148736 1089722 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1205 19:22:40.148753 1089722 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1205 19:22:40.148769 1089722 cache.go:194] Successfully downloaded all kic artifacts
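
The log above shows minikube finding the pinned kicbase image in the local Docker daemon and therefore skipping both the pull and the load. Purely for illustration (standard docker CLI, nothing minikube-specific), the same check can be made by hand on the agent:

    # List locally cached kicbase images with their digests; the sha256 should
    # match the digest pinned in the profile config logged above.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
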
	I1205 19:22:40.148797 1089722 start.go:360] acquireMachinesLock for ha-392363: {Name:mkdd06a58986f66e23938cb168655b1b533efe97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:22:40.148863 1089722 start.go:364] duration metric: took 39.83µs to acquireMachinesLock for "ha-392363"
	I1205 19:22:40.148885 1089722 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:22:40.148892 1089722 fix.go:54] fixHost starting: 
	I1205 19:22:40.149081 1089722 cli_runner.go:164] Run: docker container inspect ha-392363 --format={{.State.Status}}
	I1205 19:22:40.164432 1089722 fix.go:112] recreateIfNeeded on ha-392363: state=Stopped err=<nil>
	W1205 19:22:40.164464 1089722 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:22:40.166289 1089722 out.go:177] * Restarting existing docker container for "ha-392363" ...
	I1205 19:22:40.167381 1089722 cli_runner.go:164] Run: docker start ha-392363
	I1205 19:22:40.426550 1089722 cli_runner.go:164] Run: docker container inspect ha-392363 --format={{.State.Status}}
	I1205 19:22:40.443890 1089722 kic.go:430] container "ha-392363" state is running.
	I1205 19:22:40.444321 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363
	I1205 19:22:40.462331 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:22:40.462523 1089722 machine.go:93] provisionDockerMachine start ...
	I1205 19:22:40.462586 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:40.480046 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:40.480289 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1205 19:22:40.480307 1089722 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:22:40.481020 1089722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45582->127.0.0.1:32828: read: connection reset by peer
	I1205 19:22:43.613386 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-392363
	
	I1205 19:22:43.613424 1089722 ubuntu.go:169] provisioning hostname "ha-392363"
	I1205 19:22:43.613499 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:43.630763 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:43.630956 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1205 19:22:43.630972 1089722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-392363 && echo "ha-392363" | sudo tee /etc/hostname
	I1205 19:22:43.768788 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-392363
	
	I1205 19:22:43.768885 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:43.786560 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:43.786796 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1205 19:22:43.786816 1089722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-392363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-392363/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-392363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:22:43.913973 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:22:43.914026 1089722 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20052-999445/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-999445/.minikube}
	I1205 19:22:43.914053 1089722 ubuntu.go:177] setting up certificates
	I1205 19:22:43.914068 1089722 provision.go:84] configureAuth start
	I1205 19:22:43.914147 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363
	I1205 19:22:43.930626 1089722 provision.go:143] copyHostCerts
	I1205 19:22:43.930666 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem
	I1205 19:22:43.930711 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem, removing ...
	I1205 19:22:43.930731 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem
	I1205 19:22:43.930809 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem (1082 bytes)
	I1205 19:22:43.930923 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem
	I1205 19:22:43.930954 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem, removing ...
	I1205 19:22:43.930967 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem
	I1205 19:22:43.931007 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem (1123 bytes)
	I1205 19:22:43.931084 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem
	I1205 19:22:43.931118 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem, removing ...
	I1205 19:22:43.931128 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem
	I1205 19:22:43.931163 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem (1675 bytes)
	I1205 19:22:43.931250 1089722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem org=jenkins.ha-392363 san=[127.0.0.1 192.168.49.2 ha-392363 localhost minikube]
	I1205 19:22:44.127198 1089722 provision.go:177] copyRemoteCerts
	I1205 19:22:44.127263 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:22:44.127298 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:44.144114 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363/id_rsa Username:docker}
	I1205 19:22:44.238094 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:22:44.238165 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 19:22:44.259060 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:22:44.259127 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1205 19:22:44.279756 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:22:44.279807 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:22:44.300070 1089722 provision.go:87] duration metric: took 385.970988ms to configureAuth
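
configureAuth above regenerates the docker-machine style server certificate with the SANs listed in the log (127.0.0.1, 192.168.49.2, ha-392363, localhost, minikube) and then copies it to /etc/docker on the node. As a sketch using the path from the log and plain openssl, the embedded SANs can be confirmed with:

    # Print the Subject Alternative Names of the freshly generated server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
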
	I1205 19:22:44.300104 1089722 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:22:44.300379 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:44.300509 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:44.317190 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:44.317368 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1205 19:22:44.317384 1089722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:22:44.640263 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:22:44.640288 1089722 machine.go:96] duration metric: took 4.177751983s to provisionDockerMachine
	I1205 19:22:44.640303 1089722 start.go:293] postStartSetup for "ha-392363" (driver="docker")
	I1205 19:22:44.640313 1089722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:22:44.640368 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:22:44.640413 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:44.658743 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363/id_rsa Username:docker}
	I1205 19:22:44.750828 1089722 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:22:44.753925 1089722 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:22:44.753955 1089722 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:22:44.753963 1089722 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:22:44.753971 1089722 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 19:22:44.753983 1089722 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/addons for local assets ...
	I1205 19:22:44.754060 1089722 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/files for local assets ...
	I1205 19:22:44.754169 1089722 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> 10063152.pem in /etc/ssl/certs
	I1205 19:22:44.754186 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> /etc/ssl/certs/10063152.pem
	I1205 19:22:44.754303 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:22:44.761832 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem --> /etc/ssl/certs/10063152.pem (1708 bytes)
	I1205 19:22:44.782948 1089722 start.go:296] duration metric: took 142.633226ms for postStartSetup
	I1205 19:22:44.783010 1089722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:22:44.783048 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:44.800563 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363/id_rsa Username:docker}
	I1205 19:22:44.886740 1089722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:22:44.890782 1089722 fix.go:56] duration metric: took 4.741881801s for fixHost
	I1205 19:22:44.890814 1089722 start.go:83] releasing machines lock for "ha-392363", held for 4.741936894s
	I1205 19:22:44.890889 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363
	I1205 19:22:44.908508 1089722 ssh_runner.go:195] Run: cat /version.json
	I1205 19:22:44.908552 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:44.908560 1089722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:22:44.908655 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:44.926999 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363/id_rsa Username:docker}
	I1205 19:22:44.927467 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363/id_rsa Username:docker}
	I1205 19:22:45.013701 1089722 ssh_runner.go:195] Run: systemctl --version
	I1205 19:22:45.017862 1089722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:22:45.155982 1089722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:22:45.160310 1089722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:22:45.168580 1089722 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:22:45.168639 1089722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:22:45.176508 1089722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 19:22:45.176540 1089722 start.go:495] detecting cgroup driver to use...
	I1205 19:22:45.176580 1089722 detect.go:187] detected "cgroupfs" cgroup driver on host os
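
The "cgroupfs" choice here mirrors the CgroupDriver reported by docker info earlier in the log. A generic way to cross-check the host's cgroup setup (illustrative only, not part of the minikube flow):

    # cgroup2fs => unified cgroup v2 hierarchy, tmpfs => legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/
    # What the Docker daemon itself reports
    docker info --format '{{.CgroupDriver}} (cgroup v{{.CgroupVersion}})'
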
	I1205 19:22:45.176650 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:22:45.187879 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:22:45.197488 1089722 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:22:45.197539 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:22:45.208478 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:22:45.218029 1089722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:22:45.293572 1089722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:22:45.369120 1089722 docker.go:233] disabling docker service ...
	I1205 19:22:45.369178 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:22:45.380035 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:22:45.389535 1089722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:22:45.461762 1089722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:22:45.533791 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:22:45.543866 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:22:45.558156 1089722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:22:45.558209 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:45.566857 1089722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:22:45.566917 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:45.575319 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:45.583671 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:45.591995 1089722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:22:45.599593 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:45.607882 1089722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:45.616464 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:45.625337 1089722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:22:45.632582 1089722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:22:45.639875 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:45.717671 1089722 ssh_runner.go:195] Run: sudo systemctl restart crio
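
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. Based purely on those commands (not on the file itself), the drop-in should now pin the pause image, set the cgroupfs cgroup manager, scope conmon to the pod cgroup, and open unprivileged ports; a minimal check from inside the ha-392363 container would be:

    # Show the settings the sed edits above are expected to have produced.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, as derived from the commands in the log:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
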
	I1205 19:22:45.819090 1089722 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:22:45.819180 1089722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:22:45.822603 1089722 start.go:563] Will wait 60s for crictl version
	I1205 19:22:45.822654 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:22:45.825650 1089722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:22:45.857348 1089722 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:22:45.857434 1089722 ssh_runner.go:195] Run: crio --version
	I1205 19:22:45.891983 1089722 ssh_runner.go:195] Run: crio --version
	I1205 19:22:45.925770 1089722 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1205 19:22:45.927000 1089722 cli_runner.go:164] Run: docker network inspect ha-392363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:22:45.943418 1089722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:22:45.946900 1089722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:22:45.957476 1089722 kubeadm.go:883] updating cluster {Name:ha-392363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-392363 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevi
rt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Sock
etVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:22:45.957628 1089722 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:22:45.957681 1089722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:22:45.998807 1089722 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:22:45.998831 1089722 crio.go:433] Images already preloaded, skipping extraction
	I1205 19:22:45.998875 1089722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:22:46.030890 1089722 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:22:46.030915 1089722 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:22:46.030924 1089722 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1205 19:22:46.031027 1089722 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-392363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-392363 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
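
The [Service] override above matches the kubelet drop-in that minikube scp's a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 359-byte transfer). To see how systemd merges it with the base unit on the node, a standard check (shown only as an illustration) is:

    # Effective kubelet unit, including --hostname-override=ha-392363 and
    # --node-ip=192.168.49.2 from the minikube drop-in.
    sudo systemctl cat kubelet
    # Or read the drop-in directly:
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
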
	I1205 19:22:46.031090 1089722 ssh_runner.go:195] Run: crio config
	I1205 19:22:46.072050 1089722 cni.go:84] Creating CNI manager for ""
	I1205 19:22:46.072072 1089722 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:22:46.072088 1089722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:22:46.072116 1089722 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-392363 NodeName:ha-392363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:22:46.072236 1089722 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-392363"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
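
The kubeadm config rendered above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below (the 2283-byte scp). Assuming the v1.31.2 kubeadm binary under /var/lib/minikube/binaries supports the "config validate" subcommand (present in recent releases), the document could be sanity-checked offline before it is applied; this is a sketch, not something the test actually runs:

    # Validate the rendered kubeadm configuration without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
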
	
	I1205 19:22:46.072255 1089722 kube-vip.go:115] generating kube-vip config ...
	I1205 19:22:46.072294 1089722 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1205 19:22:46.083734 1089722 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1205 19:22:46.083849 1089722 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
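
Because the ip_vs modules are unavailable (see the "giving up enabling control-plane load-balancing" line above), this kube-vip manifest only announces the API-server VIP 192.168.49.254 over ARP on eth0 (vip_arp: "true") instead of also load-balancing via IPVS. Once the static pod is running, a quick check from inside the control-plane container (illustrative only) is:

    # The VIP should show up as a secondary address on eth0 while kube-vip
    # holds the plndr-cp-lock lease.
    ip addr show dev eth0 | grep 192.168.49.254
    # The HA endpoint should then answer on the VIP (the API server's /version
    # endpoint is readable anonymously on default installs).
    curl -k https://192.168.49.254:8443/version
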
	I1205 19:22:46.083898 1089722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:22:46.091570 1089722 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:22:46.091629 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:22:46.099046 1089722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1205 19:22:46.114234 1089722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:22:46.129198 1089722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2283 bytes)
	I1205 19:22:46.144245 1089722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1205 19:22:46.159746 1089722 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:22:46.162970 1089722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
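
The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the HA VIP 192.168.49.254, the same trick used earlier for host.minikube.internal. A one-line confirmation from inside the node, purely for illustration:

    # Both minikube-internal names should now resolve locally.
    getent hosts control-plane.minikube.internal host.minikube.internal
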
	I1205 19:22:46.172405 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:46.249231 1089722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:22:46.260920 1089722 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363 for IP: 192.168.49.2
	I1205 19:22:46.260940 1089722 certs.go:194] generating shared ca certs ...
	I1205 19:22:46.260955 1089722 certs.go:226] acquiring lock for ca certs: {Name:mk27706fe4627f850c07ffcdfc76cdd3f60bd8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:22:46.261124 1089722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key
	I1205 19:22:46.261178 1089722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key
	I1205 19:22:46.261193 1089722 certs.go:256] generating profile certs ...
	I1205 19:22:46.261291 1089722 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.key
	I1205 19:22:46.261323 1089722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key.b05489d2
	I1205 19:22:46.261344 1089722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt.b05489d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1205 19:22:46.455903 1089722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt.b05489d2 ...
	I1205 19:22:46.455937 1089722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt.b05489d2: {Name:mk06f64f4e14d1225f2a05fd571cca271cbc92da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:22:46.456144 1089722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key.b05489d2 ...
	I1205 19:22:46.456166 1089722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key.b05489d2: {Name:mk4073959c5bb9caf0e8dd25ff9a9f9a47670b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:22:46.456288 1089722 certs.go:381] copying /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt.b05489d2 -> /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt
	I1205 19:22:46.456441 1089722 certs.go:385] copying /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key.b05489d2 -> /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key
	I1205 19:22:46.456590 1089722 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.key
	I1205 19:22:46.456608 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:22:46.456623 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:22:46.456640 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:22:46.456656 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:22:46.456671 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:22:46.456690 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:22:46.456705 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:22:46.456720 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:22:46.456774 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem (1338 bytes)
	W1205 19:22:46.456807 1089722 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315_empty.pem, impossibly tiny 0 bytes
	I1205 19:22:46.456819 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:22:46.456845 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem (1082 bytes)
	I1205 19:22:46.456903 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:22:46.456937 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem (1675 bytes)
	I1205 19:22:46.456982 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem (1708 bytes)
	I1205 19:22:46.457018 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem -> /usr/share/ca-certificates/1006315.pem
	I1205 19:22:46.457035 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> /usr/share/ca-certificates/10063152.pem
	I1205 19:22:46.457050 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:46.457667 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:22:46.479683 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:22:46.500315 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:22:46.520634 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:22:46.540739 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 19:22:46.560999 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:22:46.581426 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:22:46.601461 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:22:46.621755 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem --> /usr/share/ca-certificates/1006315.pem (1338 bytes)
	I1205 19:22:46.641825 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem --> /usr/share/ca-certificates/10063152.pem (1708 bytes)
	I1205 19:22:46.662149 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:22:46.682730 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:22:46.698061 1089722 ssh_runner.go:195] Run: openssl version
	I1205 19:22:46.702779 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1006315.pem && ln -fs /usr/share/ca-certificates/1006315.pem /etc/ssl/certs/1006315.pem"
	I1205 19:22:46.710810 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1006315.pem
	I1205 19:22:46.713684 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:12 /usr/share/ca-certificates/1006315.pem
	I1205 19:22:46.713732 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1006315.pem
	I1205 19:22:46.719636 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1006315.pem /etc/ssl/certs/51391683.0"
	I1205 19:22:46.727267 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10063152.pem && ln -fs /usr/share/ca-certificates/10063152.pem /etc/ssl/certs/10063152.pem"
	I1205 19:22:46.735076 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10063152.pem
	I1205 19:22:46.738061 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:12 /usr/share/ca-certificates/10063152.pem
	I1205 19:22:46.738152 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10063152.pem
	I1205 19:22:46.744219 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10063152.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:22:46.751562 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:22:46.759664 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:46.762676 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:46.762709 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:46.768625 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:22:46.775937 1089722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:22:46.778933 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 19:22:46.784727 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 19:22:46.790489 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 19:22:46.796187 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 19:22:46.801760 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 19:22:46.807407 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
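The six `-checkend 86400` probes above ask whether each control-plane certificate stays valid for at least another 24 hours before the cluster restart proceeds. As a rough illustration only (this is not minikube's code; the path below is just one of the certificates named in the log), the same check can be expressed in Go with the standard library:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -noout -in <path> -checkend <seconds>`:
// it reports true when the certificate's NotAfter falls inside the window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Certificate path taken from the log; adjust as needed.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}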
	I1205 19:22:46.813188 1089722 kubeadm.go:392] StartCluster: {Name:ha-392363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-392363 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:
false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketV
MnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:22:46.813296 1089722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:22:46.813343 1089722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:22:46.846023 1089722 cri.go:89] found id: ""
	I1205 19:22:46.846090 1089722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:22:46.853872 1089722 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 19:22:46.853887 1089722 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 19:22:46.853917 1089722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 19:22:46.861032 1089722 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 19:22:46.861405 1089722 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-392363" does not appear in /home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:22:46.861500 1089722 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-999445/kubeconfig needs updating (will repair): [kubeconfig missing "ha-392363" cluster setting kubeconfig missing "ha-392363" context setting]
	I1205 19:22:46.861752 1089722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/kubeconfig: {Name:mk9f3e1f3f15e579e42360c3cd96b3ca0e071da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:22:46.862161 1089722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:22:46.862397 1089722 kapi.go:59] client config for ha-392363: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.key", CAFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:22:46.862795 1089722 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 19:22:46.863050 1089722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 19:22:46.870333 1089722 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 19:22:46.870349 1089722 kubeadm.go:597] duration metric: took 16.457299ms to restartPrimaryControlPlane
	I1205 19:22:46.870356 1089722 kubeadm.go:394] duration metric: took 57.177655ms to StartCluster
	I1205 19:22:46.870373 1089722 settings.go:142] acquiring lock: {Name:mk8cc47684b2d9b56f7c67a506188e087d04cea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:22:46.870428 1089722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:22:46.870947 1089722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/kubeconfig: {Name:mk9f3e1f3f15e579e42360c3cd96b3ca0e071da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:22:46.871124 1089722 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:22:46.871144 1089722 start.go:241] waiting for startup goroutines ...
	I1205 19:22:46.871152 1089722 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 19:22:46.871374 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:46.874093 1089722 out.go:177] * Enabled addons: 
	I1205 19:22:46.875265 1089722 addons.go:510] duration metric: took 4.112467ms for enable addons: enabled=[]
	I1205 19:22:46.875291 1089722 start.go:246] waiting for cluster config update ...
	I1205 19:22:46.875308 1089722 start.go:255] writing updated cluster config ...
	I1205 19:22:46.876659 1089722 out.go:201] 
	I1205 19:22:46.877868 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:46.877955 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:22:46.879387 1089722 out.go:177] * Starting "ha-392363-m02" control-plane node in "ha-392363" cluster
	I1205 19:22:46.880586 1089722 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:22:46.881748 1089722 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 19:22:46.882836 1089722 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:22:46.882853 1089722 cache.go:56] Caching tarball of preloaded images
	I1205 19:22:46.882869 1089722 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 19:22:46.882932 1089722 preload.go:172] Found /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:22:46.882945 1089722 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:22:46.883031 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:22:46.901370 1089722 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1205 19:22:46.901386 1089722 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1205 19:22:46.901401 1089722 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:22:46.901422 1089722 start.go:360] acquireMachinesLock for ha-392363-m02: {Name:mkde8218c11995fb6f424f9c825ff17e9ffff4eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:22:46.901466 1089722 start.go:364] duration metric: took 30.49µs to acquireMachinesLock for "ha-392363-m02"
	I1205 19:22:46.901482 1089722 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:22:46.901490 1089722 fix.go:54] fixHost starting: m02
	I1205 19:22:46.901676 1089722 cli_runner.go:164] Run: docker container inspect ha-392363-m02 --format={{.State.Status}}
	I1205 19:22:46.916818 1089722 fix.go:112] recreateIfNeeded on ha-392363-m02: state=Stopped err=<nil>
	W1205 19:22:46.916839 1089722 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:22:46.918471 1089722 out.go:177] * Restarting existing docker container for "ha-392363-m02" ...
	I1205 19:22:46.919718 1089722 cli_runner.go:164] Run: docker start ha-392363-m02
	I1205 19:22:47.174410 1089722 cli_runner.go:164] Run: docker container inspect ha-392363-m02 --format={{.State.Status}}
	I1205 19:22:47.191880 1089722 kic.go:430] container "ha-392363-m02" state is running.
	I1205 19:22:47.192237 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m02
	I1205 19:22:47.210073 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:22:47.210275 1089722 machine.go:93] provisionDockerMachine start ...
	I1205 19:22:47.210333 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:47.227848 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:47.228109 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1205 19:22:47.228130 1089722 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:22:47.228882 1089722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58006->127.0.0.1:32833: read: connection reset by peer
	I1205 19:22:50.353443 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-392363-m02
	
	I1205 19:22:50.353474 1089722 ubuntu.go:169] provisioning hostname "ha-392363-m02"
	I1205 19:22:50.353543 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:50.371735 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:50.371913 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1205 19:22:50.371925 1089722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-392363-m02 && echo "ha-392363-m02" | sudo tee /etc/hostname
	I1205 19:22:50.508322 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-392363-m02
	
	I1205 19:22:50.508419 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:50.525395 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:50.525583 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1205 19:22:50.525607 1089722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-392363-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-392363-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-392363-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:22:50.649658 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:22:50.649693 1089722 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20052-999445/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-999445/.minikube}
	I1205 19:22:50.649714 1089722 ubuntu.go:177] setting up certificates
	I1205 19:22:50.649728 1089722 provision.go:84] configureAuth start
	I1205 19:22:50.649791 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m02
	I1205 19:22:50.666349 1089722 provision.go:143] copyHostCerts
	I1205 19:22:50.666391 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem
	I1205 19:22:50.666428 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem, removing ...
	I1205 19:22:50.666438 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem
	I1205 19:22:50.666516 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem (1082 bytes)
	I1205 19:22:50.666607 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem
	I1205 19:22:50.666634 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem, removing ...
	I1205 19:22:50.666645 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem
	I1205 19:22:50.666687 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem (1123 bytes)
	I1205 19:22:50.666754 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem
	I1205 19:22:50.666779 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem, removing ...
	I1205 19:22:50.666788 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem
	I1205 19:22:50.666821 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem (1675 bytes)
	I1205 19:22:50.666886 1089722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem org=jenkins.ha-392363-m02 san=[127.0.0.1 192.168.49.3 ha-392363-m02 localhost minikube]
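provision.go:117 above mints a server certificate for the m02 machine whose SAN list is [127.0.0.1 192.168.49.3 ha-392363-m02 localhost minikube], signed by the profile's CA. A minimal, self-contained Go sketch of the same idea follows; it is illustrative, not minikube's provisioner, and uses a throwaway CA instead of the ca.pem/ca-key.pem pair from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (in the log the CA comes from .minikube/certs/ca.pem).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SAN list shown in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-392363-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-392363-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}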
	I1205 19:22:50.889032 1089722 provision.go:177] copyRemoteCerts
	I1205 19:22:50.889094 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:22:50.889148 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:50.906388 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m02/id_rsa Username:docker}
	I1205 19:22:50.998184 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:22:50.998247 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 19:22:51.019376 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:22:51.019442 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:22:51.040288 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:22:51.040354 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:22:51.060643 1089722 provision.go:87] duration metric: took 410.903087ms to configureAuth
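Each `new ssh client` entry (sshutil.go:53) above dials the node's forwarded SSH port (127.0.0.1:32833) as user docker with the machine's id_rsa key, and the ssh_runner/scp lines that follow ride that connection. A rough stand-alone sketch of such a client with golang.org/x/crypto/ssh, for illustration only; skipping host-key checks as done here is acceptable only for a local throwaway test container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32833", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}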
	I1205 19:22:51.060667 1089722 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:22:51.060851 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:51.060947 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:51.077392 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:22:51.077571 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1205 19:22:51.077587 1089722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:22:51.393700 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:22:51.393740 1089722 machine.go:96] duration metric: took 4.18345047s to provisionDockerMachine
	I1205 19:22:51.393755 1089722 start.go:293] postStartSetup for "ha-392363-m02" (driver="docker")
	I1205 19:22:51.393765 1089722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:22:51.393815 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:22:51.393855 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:51.411426 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m02/id_rsa Username:docker}
	I1205 19:22:51.503476 1089722 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:22:51.506641 1089722 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:22:51.506670 1089722 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:22:51.506678 1089722 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:22:51.506685 1089722 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 19:22:51.506698 1089722 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/addons for local assets ...
	I1205 19:22:51.506748 1089722 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/files for local assets ...
	I1205 19:22:51.506840 1089722 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> 10063152.pem in /etc/ssl/certs
	I1205 19:22:51.506851 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> /etc/ssl/certs/10063152.pem
	I1205 19:22:51.506930 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:22:51.514538 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem --> /etc/ssl/certs/10063152.pem (1708 bytes)
	I1205 19:22:51.536449 1089722 start.go:296] duration metric: took 142.679439ms for postStartSetup
	I1205 19:22:51.536533 1089722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:22:51.536582 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:51.553677 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m02/id_rsa Username:docker}
	I1205 19:22:51.646740 1089722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:22:51.650998 1089722 fix.go:56] duration metric: took 4.749501873s for fixHost
	I1205 19:22:51.651024 1089722 start.go:83] releasing machines lock for "ha-392363-m02", held for 4.749547734s
	I1205 19:22:51.651095 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m02
	I1205 19:22:51.669180 1089722 out.go:177] * Found network options:
	I1205 19:22:51.670599 1089722 out.go:177]   - NO_PROXY=192.168.49.2
	W1205 19:22:51.671826 1089722 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:22:51.671863 1089722 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:22:51.671929 1089722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:22:51.671982 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:51.672003 1089722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:22:51.672069 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m02
	I1205 19:22:51.689619 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m02/id_rsa Username:docker}
	I1205 19:22:51.690288 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m02/id_rsa Username:docker}
	I1205 19:22:51.979937 1089722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:22:51.987563 1089722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:22:51.999850 1089722 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:22:51.999939 1089722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:22:52.076951 1089722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 19:22:52.076983 1089722 start.go:495] detecting cgroup driver to use...
	I1205 19:22:52.077020 1089722 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 19:22:52.077071 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:22:52.093734 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:22:52.177140 1089722 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:22:52.177219 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:22:52.193946 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:22:52.207559 1089722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:22:52.576922 1089722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:22:52.876546 1089722 docker.go:233] disabling docker service ...
	I1205 19:22:52.876619 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:22:52.905985 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:22:52.975479 1089722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:22:53.285836 1089722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:22:53.596907 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:22:53.611254 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:22:53.683398 1089722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:22:53.683463 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:53.695307 1089722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:22:53.695415 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:53.709191 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:53.775800 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:53.791446 1089722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:22:53.803625 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:53.813008 1089722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:22:53.821466 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
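Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf containing at least the following settings; any surrounding TOML tables and other keys are whatever the base image shipped and are not visible in this log:

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]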
	I1205 19:22:53.830192 1089722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:22:53.884352 1089722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:22:53.894539 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:54.179888 1089722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:22:55.488972 1089722 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.309042397s)
	I1205 19:22:55.489054 1089722 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:22:55.489121 1089722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:22:55.493899 1089722 start.go:563] Will wait 60s for crictl version
	I1205 19:22:55.493968 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:22:55.497584 1089722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:22:55.534142 1089722 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:22:55.534224 1089722 ssh_runner.go:195] Run: crio --version
	I1205 19:22:55.566404 1089722 ssh_runner.go:195] Run: crio --version
	I1205 19:22:55.606681 1089722 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1205 19:22:55.607899 1089722 out.go:177]   - env NO_PROXY=192.168.49.2
	I1205 19:22:55.609014 1089722 cli_runner.go:164] Run: docker network inspect ha-392363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:22:55.633942 1089722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:22:55.637498 1089722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:22:55.647684 1089722 mustload.go:65] Loading cluster: ha-392363
	I1205 19:22:55.647893 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:55.648094 1089722 cli_runner.go:164] Run: docker container inspect ha-392363 --format={{.State.Status}}
	I1205 19:22:55.664230 1089722 host.go:66] Checking if "ha-392363" exists ...
	I1205 19:22:55.664477 1089722 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363 for IP: 192.168.49.3
	I1205 19:22:55.664489 1089722 certs.go:194] generating shared ca certs ...
	I1205 19:22:55.664503 1089722 certs.go:226] acquiring lock for ca certs: {Name:mk27706fe4627f850c07ffcdfc76cdd3f60bd8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:22:55.664614 1089722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key
	I1205 19:22:55.664649 1089722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key
	I1205 19:22:55.664658 1089722 certs.go:256] generating profile certs ...
	I1205 19:22:55.664726 1089722 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.key
	I1205 19:22:55.664779 1089722 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key.0b7fe464
	I1205 19:22:55.664813 1089722 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.key
	I1205 19:22:55.664824 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:22:55.664838 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:22:55.664851 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:22:55.664862 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:22:55.664875 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:22:55.664887 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:22:55.664900 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:22:55.664912 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:22:55.664961 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem (1338 bytes)
	W1205 19:22:55.664988 1089722 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315_empty.pem, impossibly tiny 0 bytes
	I1205 19:22:55.664998 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:22:55.665025 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem (1082 bytes)
	I1205 19:22:55.665049 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:22:55.665069 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem (1675 bytes)
	I1205 19:22:55.665106 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem (1708 bytes)
	I1205 19:22:55.665136 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem -> /usr/share/ca-certificates/1006315.pem
	I1205 19:22:55.665150 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> /usr/share/ca-certificates/10063152.pem
	I1205 19:22:55.665162 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:55.665203 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:22:55.687245 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363/id_rsa Username:docker}
	I1205 19:22:55.770222 1089722 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:22:55.773741 1089722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:22:55.785449 1089722 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:22:55.788405 1089722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:22:55.799345 1089722 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:22:55.802130 1089722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:22:55.812703 1089722 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:22:55.815537 1089722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 19:22:55.826034 1089722 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:22:55.828801 1089722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:22:55.839457 1089722 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:22:55.842354 1089722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:22:55.852859 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:22:55.873680 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:22:55.895591 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:22:55.918820 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:22:55.940990 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 19:22:55.982527 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:22:56.003977 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:22:56.025203 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:22:56.045909 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem --> /usr/share/ca-certificates/1006315.pem (1338 bytes)
	I1205 19:22:56.066481 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem --> /usr/share/ca-certificates/10063152.pem (1708 bytes)
	I1205 19:22:56.086597 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:22:56.107209 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:22:56.122730 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:22:56.137968 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:22:56.152755 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 19:22:56.167673 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:22:56.183029 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:22:56.198322 1089722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:22:56.213845 1089722 ssh_runner.go:195] Run: openssl version
	I1205 19:22:56.218732 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1006315.pem && ln -fs /usr/share/ca-certificates/1006315.pem /etc/ssl/certs/1006315.pem"
	I1205 19:22:56.226947 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1006315.pem
	I1205 19:22:56.230175 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:12 /usr/share/ca-certificates/1006315.pem
	I1205 19:22:56.230218 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1006315.pem
	I1205 19:22:56.236290 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1006315.pem /etc/ssl/certs/51391683.0"
	I1205 19:22:56.243755 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10063152.pem && ln -fs /usr/share/ca-certificates/10063152.pem /etc/ssl/certs/10063152.pem"
	I1205 19:22:56.252143 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10063152.pem
	I1205 19:22:56.255166 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:12 /usr/share/ca-certificates/10063152.pem
	I1205 19:22:56.255216 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10063152.pem
	I1205 19:22:56.261605 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10063152.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:22:56.269448 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:22:56.277845 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:56.280875 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:56.280922 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:22:56.287138 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:22:56.295001 1089722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:22:56.298004 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 19:22:56.304143 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 19:22:56.310412 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 19:22:56.316404 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 19:22:56.322364 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 19:22:56.328135 1089722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 19:22:56.333904 1089722 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.2 crio true true} ...
	I1205 19:22:56.334020 1089722 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-392363-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-392363 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:22:56.334055 1089722 kube-vip.go:115] generating kube-vip config ...
	I1205 19:22:56.334094 1089722 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1205 19:22:56.344859 1089722 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1205 19:22:56.344911 1089722 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1205 19:22:56.344957 1089722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:22:56.353006 1089722 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:22:56.353046 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:22:56.360623 1089722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 19:22:56.376783 1089722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:22:56.393332 1089722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1205 19:22:56.409269 1089722 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:22:56.412195 1089722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:22:56.421953 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:56.511718 1089722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:22:56.522093 1089722 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:22:56.522428 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:56.525077 1089722 out.go:177] * Verifying Kubernetes components...
	I1205 19:22:56.526279 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:56.621981 1089722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:22:56.633408 1089722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:22:56.633662 1089722 kapi.go:59] client config for ha-392363: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.key", CAFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:22:56.633724 1089722 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1205 19:22:56.633982 1089722 node_ready.go:35] waiting up to 6m0s for node "ha-392363-m02" to be "Ready" ...
	I1205 19:22:56.634147 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:22:56.634167 1089722 round_trippers.go:469] Request Headers:
	I1205 19:22:56.634181 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:56.634192 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:08.522401 1089722 round_trippers.go:574] Response Status: 500 Internal Server Error in 11888 milliseconds
	I1205 19:23:08.522696 1089722 node_ready.go:53] error getting node "ha-392363-m02": etcdserver: request timed out
	I1205 19:23:08.522786 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:08.522797 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:08.522808 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:08.522818 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:17.206117 1089722 round_trippers.go:574] Response Status: 500 Internal Server Error in 8683 milliseconds
	I1205 19:23:17.206277 1089722 node_ready.go:53] error getting node "ha-392363-m02": etcdserver: leader changed
	I1205 19:23:17.206371 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:17.206382 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:17.206394 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:17.206405 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:17.208713 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:17.209751 1089722 node_ready.go:49] node "ha-392363-m02" has status "Ready":"True"
	I1205 19:23:17.209780 1089722 node_ready.go:38] duration metric: took 20.575746062s for node "ha-392363-m02" to be "Ready" ...
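node_ready.go polls GET /api/v1/nodes/ha-392363-m02 until the NodeReady condition turns True; here that took ~20.6s because the first two requests hit etcd "request timed out" and "leader changed" errors during the HA restart. A bare-bones, hypothetical equivalent using client-go (standard package paths; the kubeconfig path is the one from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20052-999445/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-392363-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node")
}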
	I1205 19:23:17.209793 1089722 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:23:17.209850 1089722 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 19:23:17.209868 1089722 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 19:23:17.209938 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:23:17.209948 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:17.209959 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:17.209965 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:17.211904 1089722 round_trippers.go:574] Response Status: 429 Too Many Requests in 1 milliseconds
	I1205 19:23:18.212411 1089722 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:23:18.212474 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:23:18.212493 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.212512 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.212518 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.223623 1089722 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
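
The single 429 above is absorbed by the client's retry logic, which honours the server's Retry-After header (one second here) before re-issuing the request. A standalone sketch of that behaviour using only the Go standard library; the real client also presents the cluster's TLS credentials, which this sketch omits, so the URL is purely a placeholder:

// Retry an HTTP GET when the server answers 429 Too Many Requests,
// sleeping for the advertised Retry-After interval between attempts.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetry(url string, attempts int) (*http.Response, error) {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		resp.Body.Close()
		delay := time.Second // default back-off if the header is missing
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil {
				delay = time.Duration(secs) * time.Second
			}
		}
		fmt.Printf("429 on attempt %d, sleeping %s\n", i+1, delay)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("still throttled after %d attempts", attempts)
}

func main() {
	// Placeholder URL; the log above hits the kube-system pods endpoint.
	resp, err := getWithRetry("https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods", 3)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
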
	I1205 19:23:18.232479 1089722 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.232570 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:23:18.232578 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.232586 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.232589 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.234457 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.235053 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:18.235071 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.235080 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.235084 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.236913 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.237334 1089722 pod_ready.go:93] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:18.237352 1089722 pod_ready.go:82] duration metric: took 4.849114ms for pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.237365 1089722 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4wfjm" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.237430 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4wfjm
	I1205 19:23:18.237439 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.237446 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.237451 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.239471 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:18.239985 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:18.240000 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.240007 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.240015 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.241897 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.242454 1089722 pod_ready.go:93] pod "coredns-7c65d6cfc9-4wfjm" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:18.242477 1089722 pod_ready.go:82] duration metric: took 5.104091ms for pod "coredns-7c65d6cfc9-4wfjm" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.242491 1089722 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.242560 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-392363
	I1205 19:23:18.242572 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.242584 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.242594 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.244181 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.244689 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:18.244706 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.244712 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.244716 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.246475 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.247004 1089722 pod_ready.go:93] pod "etcd-ha-392363" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:18.247024 1089722 pod_ready.go:82] duration metric: took 4.5202ms for pod "etcd-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.247036 1089722 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.247110 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-392363-m02
	I1205 19:23:18.247122 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.247134 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.247145 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.248835 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.249457 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:18.249476 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.249489 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.249497 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.251119 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.251523 1089722 pod_ready.go:93] pod "etcd-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:18.251539 1089722 pod_ready.go:82] duration metric: took 4.49391ms for pod "etcd-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.251547 1089722 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.251591 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-392363-m03
	I1205 19:23:18.251598 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.251605 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.251612 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.253298 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:23:18.413023 1089722 request.go:632] Waited for 159.292334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:18.413104 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:18.413115 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.413123 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.413130 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.415519 1089722 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1205 19:23:18.415667 1089722 pod_ready.go:98] node "ha-392363-m03" hosting pod "etcd-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:18.415690 1089722 pod_ready.go:82] duration metric: took 164.134251ms for pod "etcd-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	E1205 19:23:18.415705 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363-m03" hosting pod "etcd-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:18.415735 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.613067 1089722 request.go:632] Waited for 197.214106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363
	I1205 19:23:18.613142 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363
	I1205 19:23:18.613149 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.613159 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.613166 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.615851 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:18.812779 1089722 request.go:632] Waited for 196.165901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:18.812906 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:18.812918 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:18.812931 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:18.812937 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:18.815169 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:18.815786 1089722 pod_ready.go:93] pod "kube-apiserver-ha-392363" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:18.815810 1089722 pod_ready.go:82] duration metric: took 400.06065ms for pod "kube-apiserver-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:18.815823 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:19.013254 1089722 request.go:632] Waited for 197.33403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363-m02
	I1205 19:23:19.013348 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363-m02
	I1205 19:23:19.013360 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:19.013373 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:19.013383 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:19.016416 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:23:19.212490 1089722 request.go:632] Waited for 195.285311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:19.212563 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:19.212573 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:19.212585 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:19.212594 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:19.215028 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:19.215646 1089722 pod_ready.go:93] pod "kube-apiserver-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:19.215672 1089722 pod_ready.go:82] duration metric: took 399.839513ms for pod "kube-apiserver-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:19.215685 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:19.412542 1089722 request.go:632] Waited for 196.759266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363-m03
	I1205 19:23:19.412610 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363-m03
	I1205 19:23:19.412621 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:19.412635 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:19.412642 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:19.415249 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:19.613234 1089722 request.go:632] Waited for 197.235304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:19.613309 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:19.613320 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:19.613333 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:19.613343 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:19.616214 1089722 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1205 19:23:19.616366 1089722 pod_ready.go:98] node "ha-392363-m03" hosting pod "kube-apiserver-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:19.616390 1089722 pod_ready.go:82] duration metric: took 400.691514ms for pod "kube-apiserver-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	E1205 19:23:19.616403 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363-m03" hosting pod "kube-apiserver-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:19.616413 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:19.812720 1089722 request.go:632] Waited for 196.18704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363
	I1205 19:23:19.812787 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363
	I1205 19:23:19.812796 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:19.812811 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:19.812819 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:19.815031 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:20.012752 1089722 request.go:632] Waited for 197.109442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:20.012814 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:20.012819 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:20.012827 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:20.012839 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:20.015481 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:20.015956 1089722 pod_ready.go:93] pod "kube-controller-manager-ha-392363" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:20.015975 1089722 pod_ready.go:82] duration metric: took 399.55102ms for pod "kube-controller-manager-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:20.015988 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:20.212948 1089722 request.go:632] Waited for 196.878675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363-m02
	I1205 19:23:20.213021 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363-m02
	I1205 19:23:20.213029 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:20.213051 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:20.213076 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:20.215496 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:20.412444 1089722 request.go:632] Waited for 196.267178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:20.412516 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:20.412523 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:20.412534 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:20.412544 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:20.415217 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:20.415738 1089722 pod_ready.go:93] pod "kube-controller-manager-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:20.415757 1089722 pod_ready.go:82] duration metric: took 399.760909ms for pod "kube-controller-manager-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:20.415767 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:20.612750 1089722 request.go:632] Waited for 196.878777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363-m03
	I1205 19:23:20.612822 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363-m03
	I1205 19:23:20.612832 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:20.612844 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:20.612853 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:20.615423 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:20.812569 1089722 request.go:632] Waited for 196.275225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:20.812625 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:20.812632 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:20.812642 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:20.812652 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:20.814355 1089722 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1205 19:23:20.814494 1089722 pod_ready.go:98] node "ha-392363-m03" hosting pod "kube-controller-manager-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:20.814515 1089722 pod_ready.go:82] duration metric: took 398.73762ms for pod "kube-controller-manager-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	E1205 19:23:20.814531 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363-m03" hosting pod "kube-controller-manager-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:20.814544 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fz7rx" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:21.012803 1089722 request.go:632] Waited for 198.163405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fz7rx
	I1205 19:23:21.012874 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fz7rx
	I1205 19:23:21.012890 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:21.012902 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:21.012912 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:21.015591 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:21.212490 1089722 request.go:632] Waited for 196.279556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:23:21.212580 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:23:21.212591 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:21.212603 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:21.212612 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:21.215084 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:21.215551 1089722 pod_ready.go:93] pod "kube-proxy-fz7rx" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:21.215573 1089722 pod_ready.go:82] duration metric: took 401.015974ms for pod "kube-proxy-fz7rx" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:21.215585 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kpdtp" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:21.412664 1089722 request.go:632] Waited for 197.000779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kpdtp
	I1205 19:23:21.412741 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kpdtp
	I1205 19:23:21.412765 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:21.412776 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:21.412780 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:21.415503 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:21.612510 1089722 request.go:632] Waited for 196.268849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:21.612571 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:21.612578 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:21.612589 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:21.612594 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:21.615022 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:21.615523 1089722 pod_ready.go:93] pod "kube-proxy-kpdtp" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:21.615540 1089722 pod_ready.go:82] duration metric: took 399.948425ms for pod "kube-proxy-kpdtp" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:21.615551 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pqjkk" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:21.812472 1089722 request.go:632] Waited for 196.821577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqjkk
	I1205 19:23:21.812543 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqjkk
	I1205 19:23:21.812554 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:21.812562 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:21.812569 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:21.815017 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:22.013008 1089722 request.go:632] Waited for 197.378825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:22.013073 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:22.013078 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:22.013086 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:22.013094 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:22.015745 1089722 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1205 19:23:22.015908 1089722 pod_ready.go:98] node "ha-392363-m03" hosting pod "kube-proxy-pqjkk" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:22.015931 1089722 pod_ready.go:82] duration metric: took 400.372766ms for pod "kube-proxy-pqjkk" in "kube-system" namespace to be "Ready" ...
	E1205 19:23:22.015950 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363-m03" hosting pod "kube-proxy-pqjkk" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:22.015966 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz9hx" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:22.213208 1089722 request.go:632] Waited for 197.11898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz9hx
	I1205 19:23:22.213269 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz9hx
	I1205 19:23:22.213274 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:22.213283 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:22.213287 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:22.215783 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:22.412796 1089722 request.go:632] Waited for 196.340537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:22.412853 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:22.412858 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:22.412866 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:22.412872 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:22.415235 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:22.415735 1089722 pod_ready.go:93] pod "kube-proxy-wz9hx" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:22.415753 1089722 pod_ready.go:82] duration metric: took 399.771868ms for pod "kube-proxy-wz9hx" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:22.415762 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:22.612807 1089722 request.go:632] Waited for 196.947627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363
	I1205 19:23:22.612868 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363
	I1205 19:23:22.612873 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:22.612889 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:22.612903 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:22.615294 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:22.813223 1089722 request.go:632] Waited for 197.333993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:22.813310 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:23:22.813319 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:22.813328 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:22.813331 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:22.815794 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:22.816313 1089722 pod_ready.go:93] pod "kube-scheduler-ha-392363" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:22.816335 1089722 pod_ready.go:82] duration metric: took 400.560663ms for pod "kube-scheduler-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:22.816349 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:23.013441 1089722 request.go:632] Waited for 196.98925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363-m02
	I1205 19:23:23.013513 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363-m02
	I1205 19:23:23.013525 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:23.013536 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:23.013544 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:23.016457 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:23.213427 1089722 request.go:632] Waited for 196.361549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:23.213498 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:23:23.213506 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:23.213514 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:23.213519 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:23.216140 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:23.216629 1089722 pod_ready.go:93] pod "kube-scheduler-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:23:23.216651 1089722 pod_ready.go:82] duration metric: took 400.293052ms for pod "kube-scheduler-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:23.216665 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:23:23.412716 1089722 request.go:632] Waited for 195.939668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363-m03
	I1205 19:23:23.412788 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363-m03
	I1205 19:23:23.412799 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:23.412811 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:23.412821 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:23.414984 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:23:23.612872 1089722 request.go:632] Waited for 197.342596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:23.612956 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m03
	I1205 19:23:23.612963 1089722 round_trippers.go:469] Request Headers:
	I1205 19:23:23.612978 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:23:23.612988 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:23:23.615320 1089722 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1205 19:23:23.615490 1089722 pod_ready.go:98] node "ha-392363-m03" hosting pod "kube-scheduler-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:23.615512 1089722 pod_ready.go:82] duration metric: took 398.839001ms for pod "kube-scheduler-ha-392363-m03" in "kube-system" namespace to be "Ready" ...
	E1205 19:23:23.615529 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363-m03" hosting pod "kube-scheduler-ha-392363-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-392363-m03": nodes "ha-392363-m03" not found
	I1205 19:23:23.615544 1089722 pod_ready.go:39] duration metric: took 6.405739942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
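
The -m03 entries above are skipped rather than failed because the node hosting them no longer exists: a 404 on the node lookup is treated as "nothing to wait for". A small client-go sketch of that check, illustrative only, with the namespace and pod name taken from this log:

// Decide whether to skip waiting on a pod because its node is gone.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("kubeconfig:", err)
		return
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-392363-m03", metav1.GetOptions{})
	if err != nil {
		fmt.Println("pod lookup failed:", err)
		return
	}
	// If the hosting node has been deleted, skip the pod instead of
	// counting it against the readiness wait.
	if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); apierrors.IsNotFound(err) {
		fmt.Printf("node %q not found, skipping pod %q\n", pod.Spec.NodeName, pod.Name)
		return
	}
	fmt.Println("node still present; keep waiting for the pod to become Ready")
}
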
	I1205 19:23:23.615569 1089722 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:23:23.615644 1089722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:23:23.626457 1089722 api_server.go:72] duration metric: took 27.104316152s to wait for apiserver process to appear ...
	I1205 19:23:23.626479 1089722 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:23:23.626501 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:23.630227 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:23.630249 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
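
The loop that follows re-checks /healthz roughly every half second; the verbose body shows that only the start-service-ip-repair-controllers post-start hook is still failing while every other check is up. A standalone Go sketch of such a poll; TLS verification is skipped purely for brevity here, whereas the test itself authenticates with the cluster's CA and client certificates:

// Poll the apiserver's /healthz endpoint until it returns 200,
// printing the detailed check list whenever it is still 500.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz request failed:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// A 500 body lists which post-start hooks have not finished yet.
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}
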
	I1205 19:23:24.126729 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:24.131311 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:24.131344 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:24.626856 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:24.630366 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:24.630388 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:25.127043 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:25.131936 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:25.131961 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:25.627138 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:25.630670 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:25.630696 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:26.127280 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:26.132072 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:26.132094 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:26.626613 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:26.630239 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:26.630270 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:27.127012 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:27.130584 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:27.130606 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:27.627170 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:27.630588 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:27.630612 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:28.127135 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:28.130862 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:28.130887 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:28.627158 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:28.630647 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:28.630673 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:29.127157 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:29.130784 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:29.130809 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:29.627194 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:29.630862 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:29.630899 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:30.126645 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:30.130256 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:30.130296 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:30.626784 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:30.630294 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:30.630317 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:31.126906 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:31.130368 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:31.130391 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:31.626929 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:31.630395 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:31.630418 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:32.127058 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:32.130915 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:32.130952 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:32.627147 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:32.630782 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:32.630810 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:33.127395 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:33.131726 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:33.131749 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:33.627155 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:33.632107 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:33.632130 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:34.126643 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:34.130241 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:34.130263 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:42.627147 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:42.630655 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:42.630688 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:43.127253 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:43.130845 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:43.130874 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:43.627152 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:43.630728 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:43.630757 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:44.127154 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:44.161596 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:44.161621 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:44.627156 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:44.631382 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:44.631408 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:45.127067 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:45.130728 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:45.130751 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:45.627139 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:45.630921 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:45.630955 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:46.127154 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:46.130705 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:46.130729 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:46.626600 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:46.630235 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:46.630263 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:47.127155 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:47.131537 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:47.131560 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:47.627149 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:47.630925 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:47.630956 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:48.127557 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:48.132528 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:48.132552 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:48.627154 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:48.630674 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:48.630698 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:49.127174 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:49.130970 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:49.130997 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:49.627165 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:49.630912 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:49.630936 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:55.127504 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:55.131169 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 19:23:55.131198 1089722 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 19:23:55.626642 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:55.737457 1089722 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": EOF
	I1205 19:23:56.126962 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:23:56.127416 1089722 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
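	For reference, the verbose per-check output captured above can be reproduced outside the test harness. This is a minimal sketch, assuming the apiserver endpoint from this run (192.168.49.2:8443) is still reachable and that anonymous access to /healthz is enabled, as it is in a default minikube cluster:

		# Fetch the health summary with per-check detail (self-signed cert, hence -k).
		curl -sk "https://192.168.49.2:8443/healthz?verbose"
		# Poll only the check that is failing in the log above.
		curl -sk "https://192.168.49.2:8443/healthz/poststarthook/start-service-ip-repair-controllers"

	Each named check is also exposed as its own subpath under /healthz, so the failing poststarthook can be watched in isolation while the apiserver restarts.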
	I1205 19:23:56.626998 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:23:56.627092 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:23:56.664178 1089722 cri.go:89] found id: "421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1"
	I1205 19:23:56.664202 1089722 cri.go:89] found id: "66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104"
	I1205 19:23:56.664206 1089722 cri.go:89] found id: ""
	I1205 19:23:56.664214 1089722 logs.go:282] 2 containers: [421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1 66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104]
	I1205 19:23:56.664264 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.668203 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.671566 1089722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:23:56.671632 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:23:56.710626 1089722 cri.go:89] found id: "d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990"
	I1205 19:23:56.710656 1089722 cri.go:89] found id: "4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b"
	I1205 19:23:56.710662 1089722 cri.go:89] found id: ""
	I1205 19:23:56.710672 1089722 logs.go:282] 2 containers: [d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990 4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b]
	I1205 19:23:56.710727 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.714060 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.717066 1089722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:23:56.717119 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:23:56.749912 1089722 cri.go:89] found id: ""
	I1205 19:23:56.749933 1089722 logs.go:282] 0 containers: []
	W1205 19:23:56.749942 1089722 logs.go:284] No container was found matching "coredns"
	I1205 19:23:56.749948 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:23:56.750021 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:23:56.782613 1089722 cri.go:89] found id: "e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c"
	I1205 19:23:56.782632 1089722 cri.go:89] found id: "b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175"
	I1205 19:23:56.782637 1089722 cri.go:89] found id: ""
	I1205 19:23:56.782644 1089722 logs.go:282] 2 containers: [e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175]
	I1205 19:23:56.782685 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.785815 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.788893 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:23:56.788951 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:23:56.821264 1089722 cri.go:89] found id: "56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c"
	I1205 19:23:56.821282 1089722 cri.go:89] found id: ""
	I1205 19:23:56.821290 1089722 logs.go:282] 1 containers: [56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c]
	I1205 19:23:56.821337 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.824637 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:23:56.824692 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:23:56.856823 1089722 cri.go:89] found id: "43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f"
	I1205 19:23:56.856847 1089722 cri.go:89] found id: "de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a"
	I1205 19:23:56.856853 1089722 cri.go:89] found id: ""
	I1205 19:23:56.856860 1089722 logs.go:282] 2 containers: [43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a]
	I1205 19:23:56.856898 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.860512 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.863530 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:23:56.863597 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:23:56.902574 1089722 cri.go:89] found id: "57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455"
	I1205 19:23:56.902600 1089722 cri.go:89] found id: ""
	I1205 19:23:56.902610 1089722 logs.go:282] 1 containers: [57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455]
	I1205 19:23:56.902666 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:23:56.906964 1089722 logs.go:123] Gathering logs for kindnet [57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455] ...
	I1205 19:23:56.906987 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455"
	I1205 19:23:56.945215 1089722 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:23:56.945249 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:23:57.009537 1089722 logs.go:123] Gathering logs for kube-apiserver [421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1] ...
	I1205 19:23:57.009576 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1"
	I1205 19:23:57.059590 1089722 logs.go:123] Gathering logs for kube-apiserver [66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104] ...
	I1205 19:23:57.059639 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104"
	I1205 19:23:57.102725 1089722 logs.go:123] Gathering logs for etcd [d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990] ...
	I1205 19:23:57.102768 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990"
	I1205 19:23:57.151803 1089722 logs.go:123] Gathering logs for etcd [4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b] ...
	I1205 19:23:57.151835 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b"
	I1205 19:23:57.207246 1089722 logs.go:123] Gathering logs for kube-controller-manager [de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a] ...
	I1205 19:23:57.207280 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a"
	I1205 19:23:57.245440 1089722 logs.go:123] Gathering logs for container status ...
	I1205 19:23:57.245516 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:23:57.296326 1089722 logs.go:123] Gathering logs for kubelet ...
	I1205 19:23:57.296365 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:23:57.357755 1089722 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:23:57.357796 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:23:57.589552 1089722 logs.go:123] Gathering logs for kube-scheduler [b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175] ...
	I1205 19:23:57.589583 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175"
	I1205 19:23:57.630558 1089722 logs.go:123] Gathering logs for kube-proxy [56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c] ...
	I1205 19:23:57.630588 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c"
	I1205 19:23:57.664874 1089722 logs.go:123] Gathering logs for kube-controller-manager [43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f] ...
	I1205 19:23:57.664904 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f"
	I1205 19:23:57.726549 1089722 logs.go:123] Gathering logs for dmesg ...
	I1205 19:23:57.726583 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:23:57.744362 1089722 logs.go:123] Gathering logs for kube-scheduler [e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c] ...
	I1205 19:23:57.744392 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c"
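	The gathering steps above can be replayed by hand on the node when triaging a similar restart failure. A minimal sketch, assuming SSH access to the control-plane node and with <container-id> standing in for whatever IDs "sudo crictl ps -a" reports (the IDs in this run will not exist elsewhere):

		# List control-plane containers known to CRI-O; the IDs feed the log command below.
		sudo crictl ps -a --name=kube-apiserver
		# Tail one container's log, mirroring minikube's "--tail 400" calls above.
		sudo /usr/bin/crictl logs --tail 400 <container-id>
		# Unit logs for the container runtime and the kubelet, as gathered above.
		sudo journalctl -u crio -n 400
		sudo journalctl -u kubelet -n 400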
	I1205 19:24:00.298730 1089722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:24:00.304130 1089722 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:24:00.304224 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I1205 19:24:00.304238 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:00.304251 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:00.304260 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:00.309569 1089722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:24:00.309699 1089722 api_server.go:141] control plane version: v1.31.2
	I1205 19:24:00.309722 1089722 api_server.go:131] duration metric: took 36.683234826s to wait for apiserver health ...
	I1205 19:24:00.309733 1089722 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:24:00.309763 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:24:00.309818 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:24:00.345775 1089722 cri.go:89] found id: "421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1"
	I1205 19:24:00.345802 1089722 cri.go:89] found id: "66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104"
	I1205 19:24:00.345809 1089722 cri.go:89] found id: ""
	I1205 19:24:00.345818 1089722 logs.go:282] 2 containers: [421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1 66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104]
	I1205 19:24:00.345876 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.349573 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.352755 1089722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:24:00.352820 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:24:00.389620 1089722 cri.go:89] found id: "d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990"
	I1205 19:24:00.389645 1089722 cri.go:89] found id: "4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b"
	I1205 19:24:00.389649 1089722 cri.go:89] found id: ""
	I1205 19:24:00.389657 1089722 logs.go:282] 2 containers: [d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990 4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b]
	I1205 19:24:00.389699 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.393100 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.396461 1089722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:24:00.396510 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:24:00.432068 1089722 cri.go:89] found id: ""
	I1205 19:24:00.432093 1089722 logs.go:282] 0 containers: []
	W1205 19:24:00.432102 1089722 logs.go:284] No container was found matching "coredns"
	I1205 19:24:00.432116 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:24:00.432160 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:24:00.467332 1089722 cri.go:89] found id: "e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c"
	I1205 19:24:00.467355 1089722 cri.go:89] found id: "b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175"
	I1205 19:24:00.467359 1089722 cri.go:89] found id: ""
	I1205 19:24:00.467366 1089722 logs.go:282] 2 containers: [e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175]
	I1205 19:24:00.467409 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.470850 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.473979 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:24:00.474063 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:24:00.507102 1089722 cri.go:89] found id: "56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c"
	I1205 19:24:00.507126 1089722 cri.go:89] found id: ""
	I1205 19:24:00.507134 1089722 logs.go:282] 1 containers: [56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c]
	I1205 19:24:00.507181 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.510229 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:24:00.510286 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:24:00.542443 1089722 cri.go:89] found id: "43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f"
	I1205 19:24:00.542468 1089722 cri.go:89] found id: "de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a"
	I1205 19:24:00.542475 1089722 cri.go:89] found id: ""
	I1205 19:24:00.542483 1089722 logs.go:282] 2 containers: [43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a]
	I1205 19:24:00.542539 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.545838 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.548922 1089722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:24:00.548983 1089722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:24:00.585018 1089722 cri.go:89] found id: "57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455"
	I1205 19:24:00.585042 1089722 cri.go:89] found id: ""
	I1205 19:24:00.585049 1089722 logs.go:282] 1 containers: [57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455]
	I1205 19:24:00.585094 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:00.588349 1089722 logs.go:123] Gathering logs for kubelet ...
	I1205 19:24:00.588371 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:24:00.641594 1089722 logs.go:123] Gathering logs for dmesg ...
	I1205 19:24:00.641629 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:24:00.658937 1089722 logs.go:123] Gathering logs for kube-controller-manager [43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f] ...
	I1205 19:24:00.658960 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43756436be668e8cf0603eec47e165e0712eb615c5743b034ae1625d5f5e524f"
	I1205 19:24:00.705331 1089722 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:24:00.705363 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:24:00.764495 1089722 logs.go:123] Gathering logs for etcd [d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990] ...
	I1205 19:24:00.764525 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d9ba0a80cd45d61ef93f48218e06a9eb01080aee34ff1da82d1a8a421a7990"
	I1205 19:24:00.807695 1089722 logs.go:123] Gathering logs for kube-scheduler [b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175] ...
	I1205 19:24:00.807730 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3632a8cb1ba8903100dfe47aefabc0fa97286bd759bb36aaf6dc1e1bad58175"
	I1205 19:24:00.842290 1089722 logs.go:123] Gathering logs for kindnet [57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455] ...
	I1205 19:24:00.842320 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57831a3830e172c95d88023e369c1200ab2b3c15d9e451637f9143bbcc8bf455"
	I1205 19:24:00.874197 1089722 logs.go:123] Gathering logs for kube-apiserver [66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104] ...
	I1205 19:24:00.874224 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66b3f9db553fce330ca8928ac99e609fa75fa7b1a1199b177634aec991806104"
	I1205 19:24:00.907654 1089722 logs.go:123] Gathering logs for etcd [4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b] ...
	I1205 19:24:00.907681 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d9d2c72bf6e0b3a34bdf8b86dce8ed53eca062d64ca9f0441f70f71f008638b"
	I1205 19:24:00.953210 1089722 logs.go:123] Gathering logs for kube-proxy [56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c] ...
	I1205 19:24:00.953240 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56de0466ad7390c15ac7d6048ce3bb61178dae02b4a54001b36986c490e8f29c"
	I1205 19:24:00.985887 1089722 logs.go:123] Gathering logs for container status ...
	I1205 19:24:00.985917 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:24:01.023433 1089722 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:24:01.023464 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:24:01.196205 1089722 logs.go:123] Gathering logs for kube-apiserver [421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1] ...
	I1205 19:24:01.196235 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 421197b1aee87b4b7252c96ddce1426e2b6ecabf5b59c4da695748efbd7693e1"
	I1205 19:24:01.235126 1089722 logs.go:123] Gathering logs for kube-scheduler [e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c] ...
	I1205 19:24:01.235159 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e255bd607ad3d6258ab06303e9eb9adf32db4111d4c6dbc923322a35fae7849c"
	I1205 19:24:01.293519 1089722 logs.go:123] Gathering logs for kube-controller-manager [de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a] ...
	I1205 19:24:01.293566 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de89225da1fee11cccbae7e09a2532f4a86f96992fa42893a3a2cb0387258a5a"
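
The log-gathering block above repeats one pattern per component: resolve crictl, list matching container IDs with "crictl ps -a --quiet --name=<component>", then tail each container's log. A minimal shell sketch of that pattern, with the component names and the 400-line tail taken from the log (illustrative only; minikube drives these commands from Go via ssh_runner):

	#!/usr/bin/env bash
	# Illustrative re-creation of the per-component log gathering above (not minikube's Go code).
	set -euo pipefail
	CRICTL="$(which crictl)"
	for name in kube-apiserver etcd kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # List every container (running or exited) whose name matches the component.
	  for id in $(sudo "$CRICTL" ps -a --quiet --name="$name"); do
	    echo "== $name [$id] =="
	    sudo "$CRICTL" logs --tail 400 "$id" || true   # same --tail 400 as in the log
	  done
	done
	# Cluster-level sources collected the same way above:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
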
	I1205 19:24:03.838156 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:24:03.838176 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:03.838185 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:03.838188 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:03.844249 1089722 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:24:03.850820 1089722 system_pods.go:59] 19 kube-system pods found
	I1205 19:24:03.850872 1089722 system_pods.go:61] "coredns-7c65d6cfc9-2n94f" [52934535-ed7d-42bd-a684-f8f03cb6c8fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 19:24:03.850885 1089722 system_pods.go:61] "coredns-7c65d6cfc9-4wfjm" [3b4a5826-994a-43f5-b649-9cf605144aaf] Running
	I1205 19:24:03.850895 1089722 system_pods.go:61] "etcd-ha-392363" [d5d270e3-72f2-46a8-834c-38be3fedb9dd] Running
	I1205 19:24:03.850902 1089722 system_pods.go:61] "etcd-ha-392363-m02" [8fc37a40-5002-4184-8fca-2b4b05b8e68f] Running
	I1205 19:24:03.850911 1089722 system_pods.go:61] "kindnet-4kzwv" [096cc73a-a672-46d7-a527-bca392370c21] Running
	I1205 19:24:03.850916 1089722 system_pods.go:61] "kindnet-w8jfq" [e309ae7d-df02-4c1c-8792-6523047deb9b] Running
	I1205 19:24:03.850928 1089722 system_pods.go:61] "kindnet-xp8pn" [128e3a05-3728-44f7-873b-aba2e96f0733] Running
	I1205 19:24:03.850940 1089722 system_pods.go:61] "kube-apiserver-ha-392363" [f22c0a5e-ec06-4662-98d1-4cc438ae59a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 19:24:03.850950 1089722 system_pods.go:61] "kube-apiserver-ha-392363-m02" [41b9d363-ab75-4d19-a3c7-f1282af1d9da] Running
	I1205 19:24:03.850962 1089722 system_pods.go:61] "kube-controller-manager-ha-392363" [dbec8d2c-fb45-457c-887c-16ad6a8e7401] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 19:24:03.850971 1089722 system_pods.go:61] "kube-controller-manager-ha-392363-m02" [9462c9f2-1d66-4ffa-bc1d-770a7b000c3e] Running
	I1205 19:24:03.850982 1089722 system_pods.go:61] "kube-proxy-fz7rx" [4c2ecfa1-a0fc-4b27-aeef-04619c9ac0cb] Running
	I1205 19:24:03.850990 1089722 system_pods.go:61] "kube-proxy-kpdtp" [60c89a3d-0b2e-4bee-92f1-fd5a224c6274] Running
	I1205 19:24:03.850996 1089722 system_pods.go:61] "kube-proxy-wz9hx" [c1752a51-7481-48cf-b401-59a52b2446a7] Running
	I1205 19:24:03.851004 1089722 system_pods.go:61] "kube-scheduler-ha-392363" [ea5df5fd-1d06-446b-8ad7-0aedf17ae7c2] Running
	I1205 19:24:03.851010 1089722 system_pods.go:61] "kube-scheduler-ha-392363-m02" [290d53e9-fb20-4e0b-8a53-7f6dfa47b484] Running
	I1205 19:24:03.851018 1089722 system_pods.go:61] "kube-vip-ha-392363" [8a59b735-5355-42e5-ad6c-4288f1f4b140] Running
	I1205 19:24:03.851024 1089722 system_pods.go:61] "kube-vip-ha-392363-m02" [e29aea1d-2cbf-423f-ba40-141f3de152ff] Running
	I1205 19:24:03.851033 1089722 system_pods.go:61] "storage-provisioner" [fe0ddd9a-5068-425c-9050-a8a784d959ec] Running
	I1205 19:24:03.851042 1089722 system_pods.go:74] duration metric: took 3.541299294s to wait for pod list to return data ...
	I1205 19:24:03.851056 1089722 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:24:03.851146 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:24:03.851155 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:03.851166 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:03.851171 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:03.853750 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:03.854016 1089722 default_sa.go:45] found service account: "default"
	I1205 19:24:03.854034 1089722 default_sa.go:55] duration metric: took 2.967635ms for default service account to be created ...
	I1205 19:24:03.854043 1089722 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:24:03.854113 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:24:03.854124 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:03.854134 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:03.854143 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:03.857612 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:03.864222 1089722 system_pods.go:86] 19 kube-system pods found
	I1205 19:24:03.864250 1089722 system_pods.go:89] "coredns-7c65d6cfc9-2n94f" [52934535-ed7d-42bd-a684-f8f03cb6c8fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 19:24:03.864258 1089722 system_pods.go:89] "coredns-7c65d6cfc9-4wfjm" [3b4a5826-994a-43f5-b649-9cf605144aaf] Running
	I1205 19:24:03.864275 1089722 system_pods.go:89] "etcd-ha-392363" [d5d270e3-72f2-46a8-834c-38be3fedb9dd] Running
	I1205 19:24:03.864286 1089722 system_pods.go:89] "etcd-ha-392363-m02" [8fc37a40-5002-4184-8fca-2b4b05b8e68f] Running
	I1205 19:24:03.864293 1089722 system_pods.go:89] "kindnet-4kzwv" [096cc73a-a672-46d7-a527-bca392370c21] Running
	I1205 19:24:03.864299 1089722 system_pods.go:89] "kindnet-w8jfq" [e309ae7d-df02-4c1c-8792-6523047deb9b] Running
	I1205 19:24:03.864306 1089722 system_pods.go:89] "kindnet-xp8pn" [128e3a05-3728-44f7-873b-aba2e96f0733] Running
	I1205 19:24:03.864319 1089722 system_pods.go:89] "kube-apiserver-ha-392363" [f22c0a5e-ec06-4662-98d1-4cc438ae59a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 19:24:03.864329 1089722 system_pods.go:89] "kube-apiserver-ha-392363-m02" [41b9d363-ab75-4d19-a3c7-f1282af1d9da] Running
	I1205 19:24:03.864345 1089722 system_pods.go:89] "kube-controller-manager-ha-392363" [dbec8d2c-fb45-457c-887c-16ad6a8e7401] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 19:24:03.864355 1089722 system_pods.go:89] "kube-controller-manager-ha-392363-m02" [9462c9f2-1d66-4ffa-bc1d-770a7b000c3e] Running
	I1205 19:24:03.864363 1089722 system_pods.go:89] "kube-proxy-fz7rx" [4c2ecfa1-a0fc-4b27-aeef-04619c9ac0cb] Running
	I1205 19:24:03.864369 1089722 system_pods.go:89] "kube-proxy-kpdtp" [60c89a3d-0b2e-4bee-92f1-fd5a224c6274] Running
	I1205 19:24:03.864376 1089722 system_pods.go:89] "kube-proxy-wz9hx" [c1752a51-7481-48cf-b401-59a52b2446a7] Running
	I1205 19:24:03.864385 1089722 system_pods.go:89] "kube-scheduler-ha-392363" [ea5df5fd-1d06-446b-8ad7-0aedf17ae7c2] Running
	I1205 19:24:03.864391 1089722 system_pods.go:89] "kube-scheduler-ha-392363-m02" [290d53e9-fb20-4e0b-8a53-7f6dfa47b484] Running
	I1205 19:24:03.864400 1089722 system_pods.go:89] "kube-vip-ha-392363" [8a59b735-5355-42e5-ad6c-4288f1f4b140] Running
	I1205 19:24:03.864406 1089722 system_pods.go:89] "kube-vip-ha-392363-m02" [e29aea1d-2cbf-423f-ba40-141f3de152ff] Running
	I1205 19:24:03.864414 1089722 system_pods.go:89] "storage-provisioner" [fe0ddd9a-5068-425c-9050-a8a784d959ec] Running
	I1205 19:24:03.864423 1089722 system_pods.go:126] duration metric: took 10.372801ms to wait for k8s-apps to be running ...
	I1205 19:24:03.864434 1089722 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:24:03.864486 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:24:03.877496 1089722 system_svc.go:56] duration metric: took 13.053539ms WaitForService to wait for kubelet
	I1205 19:24:03.877526 1089722 kubeadm.go:582] duration metric: took 1m7.355388185s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:24:03.877549 1089722 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:24:03.877643 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1205 19:24:03.877656 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:03.877667 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:03.877673 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:03.880952 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:03.882047 1089722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:24:03.882075 1089722 node_conditions.go:123] node cpu capacity is 8
	I1205 19:24:03.882095 1089722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:24:03.882104 1089722 node_conditions.go:123] node cpu capacity is 8
	I1205 19:24:03.882110 1089722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:24:03.882118 1089722 node_conditions.go:123] node cpu capacity is 8
	I1205 19:24:03.882124 1089722 node_conditions.go:105] duration metric: took 4.565226ms to run NodePressure ...
	I1205 19:24:03.882139 1089722 start.go:241] waiting for startup goroutines ...
	I1205 19:24:03.882163 1089722 start.go:255] writing updated cluster config ...
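
The API checks above (kube-system pod list, default service account, node capacity for the NodePressure check) are plain GETs against https://192.168.49.2:8443 made by minikube's round_trippers client. A rough kubectl equivalent, assuming the kubeconfig logged later in this run (/home/jenkins/minikube-integration/20052-999445/kubeconfig) selects the same ha-392363 cluster:

	KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	# 19 kube-system pods, a few still reporting ContainersNotReady:
	kubectl --kubeconfig "$KUBECONFIG" -n kube-system get pods
	# The default service account the run waits for:
	kubectl --kubeconfig "$KUBECONFIG" -n default get serviceaccount default
	# Inputs to the NodePressure check: CPU and ephemeral-storage capacity per node.
	kubectl --kubeconfig "$KUBECONFIG" get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\tcpu="}{.status.capacity.cpu}{"\tephemeral-storage="}{.status.capacity.ephemeral-storage}{"\n"}{end}'
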
	I1205 19:24:03.884104 1089722 out.go:201] 
	I1205 19:24:03.885528 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:24:03.885614 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:24:03.887122 1089722 out.go:177] * Starting "ha-392363-m04" worker node in "ha-392363" cluster
	I1205 19:24:03.888325 1089722 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:24:03.889394 1089722 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 19:24:03.890417 1089722 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:24:03.890437 1089722 cache.go:56] Caching tarball of preloaded images
	I1205 19:24:03.890492 1089722 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 19:24:03.890547 1089722 preload.go:172] Found /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:24:03.890571 1089722 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:24:03.890692 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:24:03.911277 1089722 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1205 19:24:03.911297 1089722 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1205 19:24:03.911313 1089722 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:24:03.911346 1089722 start.go:360] acquireMachinesLock for ha-392363-m04: {Name:mk09eb74de01acd44d990e385085f2e3f54f8d0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:24:03.911404 1089722 start.go:364] duration metric: took 42.283µs to acquireMachinesLock for "ha-392363-m04"
	I1205 19:24:03.911422 1089722 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:24:03.911428 1089722 fix.go:54] fixHost starting: m04
	I1205 19:24:03.911627 1089722 cli_runner.go:164] Run: docker container inspect ha-392363-m04 --format={{.State.Status}}
	I1205 19:24:03.928694 1089722 fix.go:112] recreateIfNeeded on ha-392363-m04: state=Stopped err=<nil>
	W1205 19:24:03.928728 1089722 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:24:03.930418 1089722 out.go:177] * Restarting existing docker container for "ha-392363-m04" ...
	I1205 19:24:03.931508 1089722 cli_runner.go:164] Run: docker start ha-392363-m04
	I1205 19:24:04.206241 1089722 cli_runner.go:164] Run: docker container inspect ha-392363-m04 --format={{.State.Status}}
	I1205 19:24:04.222945 1089722 kic.go:430] container "ha-392363-m04" state is running.
	I1205 19:24:04.223360 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m04
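
Restarting the stopped m04 machine is plain docker CLI work; a hand-driven sketch of the same fixHost sequence (container name taken from the log above, loop added for illustration):

	docker container inspect ha-392363-m04 --format '{{.State.Status}}'   # "exited" before the restart
	docker start ha-392363-m04
	# Wait for the container to report "running", then read its IP the way the kic driver does.
	until [ "$(docker container inspect ha-392363-m04 --format '{{.State.Status}}')" = "running" ]; do
	  sleep 1
	done
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-392363-m04
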
	I1205 19:24:04.241373 1089722 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/config.json ...
	I1205 19:24:04.241649 1089722 machine.go:93] provisionDockerMachine start ...
	I1205 19:24:04.241705 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:04.259248 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:24:04.259474 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1205 19:24:04.259490 1089722 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:24:04.260237 1089722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50442->127.0.0.1:32838: read: connection reset by peer
	I1205 19:24:07.390074 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-392363-m04
	
	I1205 19:24:07.390107 1089722 ubuntu.go:169] provisioning hostname "ha-392363-m04"
	I1205 19:24:07.390160 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:07.408209 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:24:07.408393 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1205 19:24:07.408407 1089722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-392363-m04 && echo "ha-392363-m04" | sudo tee /etc/hostname
	I1205 19:24:07.548340 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-392363-m04
	
	I1205 19:24:07.548429 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:07.566098 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:24:07.566299 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1205 19:24:07.566316 1089722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-392363-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-392363-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-392363-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:24:07.694127 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:24:07.694165 1089722 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20052-999445/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-999445/.minikube}
	I1205 19:24:07.694187 1089722 ubuntu.go:177] setting up certificates
	I1205 19:24:07.694202 1089722 provision.go:84] configureAuth start
	I1205 19:24:07.694291 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m04
	I1205 19:24:07.710911 1089722 provision.go:143] copyHostCerts
	I1205 19:24:07.710956 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem
	I1205 19:24:07.710985 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem, removing ...
	I1205 19:24:07.710995 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem
	I1205 19:24:07.711074 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/cert.pem (1123 bytes)
	I1205 19:24:07.711181 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem
	I1205 19:24:07.711209 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem, removing ...
	I1205 19:24:07.711217 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem
	I1205 19:24:07.711252 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/key.pem (1675 bytes)
	I1205 19:24:07.711317 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem
	I1205 19:24:07.711342 1089722 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem, removing ...
	I1205 19:24:07.711350 1089722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem
	I1205 19:24:07.711387 1089722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-999445/.minikube/ca.pem (1082 bytes)
	I1205 19:24:07.712035 1089722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem org=jenkins.ha-392363-m04 san=[127.0.0.1 192.168.49.5 ha-392363-m04 localhost minikube]
	I1205 19:24:07.840040 1089722 provision.go:177] copyRemoteCerts
	I1205 19:24:07.840099 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:24:07.840142 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:07.857661 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m04/id_rsa Username:docker}
	I1205 19:24:07.950581 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:24:07.950670 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 19:24:07.972167 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:24:07.972246 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:24:07.993973 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:24:07.994045 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:24:08.016842 1089722 provision.go:87] duration metric: took 322.616859ms to configureAuth
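
configureAuth copies the host certs and then generates a machine server certificate in Go (the provision.go:117 line above). An equivalent openssl sketch for the same certificate; the SANs and organization are copied from that log line, while the CN and output file names here are assumptions made for this sketch, not what minikube actually writes:

	CERTS=/home/jenkins/minikube-integration/20052-999445/.minikube/certs
	# Key + CSR; O= matches the log, CN=minikube is an assumption for this sketch.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.ha-392363-m04/CN=minikube"
	# Sign with the minikube CA and attach the SANs listed in the log.
	openssl x509 -req -in server.csr -days 365 \
	  -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.5,DNS:ha-392363-m04,DNS:localhost,DNS:minikube') \
	  -out server.pem
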
	I1205 19:24:08.016874 1089722 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:24:08.017093 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:24:08.017223 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:08.034989 1089722 main.go:141] libmachine: Using SSH client type: native
	I1205 19:24:08.035217 1089722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1205 19:24:08.035242 1089722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:24:08.257508 1089722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:24:08.257552 1089722 machine.go:96] duration metric: took 4.015887529s to provisionDockerMachine
	I1205 19:24:08.257568 1089722 start.go:293] postStartSetup for "ha-392363-m04" (driver="docker")
	I1205 19:24:08.257584 1089722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:24:08.257656 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:24:08.257708 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:08.274683 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m04/id_rsa Username:docker}
	I1205 19:24:08.366659 1089722 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:24:08.369625 1089722 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:24:08.369656 1089722 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:24:08.369665 1089722 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:24:08.369672 1089722 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 19:24:08.369681 1089722 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/addons for local assets ...
	I1205 19:24:08.369744 1089722 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-999445/.minikube/files for local assets ...
	I1205 19:24:08.369811 1089722 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> 10063152.pem in /etc/ssl/certs
	I1205 19:24:08.369820 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> /etc/ssl/certs/10063152.pem
	I1205 19:24:08.369901 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:24:08.377552 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem --> /etc/ssl/certs/10063152.pem (1708 bytes)
	I1205 19:24:08.400385 1089722 start.go:296] duration metric: took 142.799702ms for postStartSetup
	I1205 19:24:08.400469 1089722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:24:08.400526 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:08.418048 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m04/id_rsa Username:docker}
	I1205 19:24:08.506500 1089722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:24:08.510559 1089722 fix.go:56] duration metric: took 4.599123513s for fixHost
	I1205 19:24:08.510587 1089722 start.go:83] releasing machines lock for "ha-392363-m04", held for 4.59917098s
	I1205 19:24:08.510649 1089722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m04
	I1205 19:24:08.528806 1089722 out.go:177] * Found network options:
	I1205 19:24:08.530112 1089722 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1205 19:24:08.531290 1089722 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:24:08.531313 1089722 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:24:08.531336 1089722 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:24:08.531353 1089722 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:24:08.531444 1089722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:24:08.531499 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:08.531521 1089722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:24:08.531586 1089722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:24:08.548996 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m04/id_rsa Username:docker}
	I1205 19:24:08.550164 1089722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m04/id_rsa Username:docker}
	I1205 19:24:08.772589 1089722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:24:08.776937 1089722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:24:08.785097 1089722 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:24:08.785181 1089722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:24:08.792856 1089722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
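
The CNI cleanup above renames any loopback, bridge, or podman configs to *.mk_disabled so that only kindnet's configuration stays active. The same steps as a standalone script (paths and name patterns taken from the find commands in the log):

	# Disable the loopback CNI config, if present.
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	  -not -name '*.mk_disabled' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
	# Disable any bridge/podman CNI configs that are not already disabled.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
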
	I1205 19:24:08.792887 1089722 start.go:495] detecting cgroup driver to use...
	I1205 19:24:08.792917 1089722 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 19:24:08.792956 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:24:08.804003 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:24:08.813931 1089722 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:24:08.813984 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:24:08.825038 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:24:08.834526 1089722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:24:08.913497 1089722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:24:08.987808 1089722 docker.go:233] disabling docker service ...
	I1205 19:24:08.987887 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:24:08.999318 1089722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:24:09.009241 1089722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:24:09.090851 1089722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:24:09.166782 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:24:09.177071 1089722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:24:09.191944 1089722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:24:09.192015 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:24:09.200858 1089722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:24:09.200909 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:24:09.209316 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:24:09.217550 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:24:09.226045 1089722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:24:09.233912 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:24:09.242309 1089722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:24:09.250676 1089722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:24:09.259102 1089722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:24:09.266152 1089722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:24:09.273192 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:24:09.352682 1089722 ssh_runner.go:195] Run: sudo systemctl restart crio
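
The CRI-O setup above is a crictl endpoint file plus a series of sed edits to /etc/crio/crio.conf.d/02-crio.conf. Collected into one script for readability (commands taken from the log; this runs on the node, not the host):

	# Point crictl at the CRI-O socket.
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# Pin the pause image and the cgroup driver kubelet expects.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Allow unprivileged binds to low ports inside pods.
	sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# Enable forwarding and restart the runtime.
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio
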
	I1205 19:24:09.466983 1089722 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:24:09.467049 1089722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:24:09.470342 1089722 start.go:563] Will wait 60s for crictl version
	I1205 19:24:09.470397 1089722 ssh_runner.go:195] Run: which crictl
	I1205 19:24:09.473500 1089722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:24:09.507118 1089722 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:24:09.507192 1089722 ssh_runner.go:195] Run: crio --version
	I1205 19:24:09.539481 1089722 ssh_runner.go:195] Run: crio --version
	I1205 19:24:09.574990 1089722 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1205 19:24:09.576464 1089722 out.go:177]   - env NO_PROXY=192.168.49.2
	I1205 19:24:09.577709 1089722 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1205 19:24:09.578975 1089722 cli_runner.go:164] Run: docker network inspect ha-392363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:24:09.596176 1089722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:24:09.599809 1089722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:24:09.610422 1089722 mustload.go:65] Loading cluster: ha-392363
	I1205 19:24:09.610643 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:24:09.610847 1089722 cli_runner.go:164] Run: docker container inspect ha-392363 --format={{.State.Status}}
	I1205 19:24:09.627859 1089722 host.go:66] Checking if "ha-392363" exists ...
	I1205 19:24:09.628091 1089722 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363 for IP: 192.168.49.5
	I1205 19:24:09.628103 1089722 certs.go:194] generating shared ca certs ...
	I1205 19:24:09.628124 1089722 certs.go:226] acquiring lock for ca certs: {Name:mk27706fe4627f850c07ffcdfc76cdd3f60bd8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:24:09.628237 1089722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key
	I1205 19:24:09.628276 1089722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key
	I1205 19:24:09.628288 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:24:09.628302 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:24:09.628314 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:24:09.628332 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:24:09.628383 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem (1338 bytes)
	W1205 19:24:09.628412 1089722 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315_empty.pem, impossibly tiny 0 bytes
	I1205 19:24:09.628422 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:24:09.628445 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/ca.pem (1082 bytes)
	I1205 19:24:09.628469 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:24:09.628492 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/key.pem (1675 bytes)
	I1205 19:24:09.628535 1089722 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem (1708 bytes)
	I1205 19:24:09.628562 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:24:09.628575 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem -> /usr/share/ca-certificates/1006315.pem
	I1205 19:24:09.628590 1089722 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem -> /usr/share/ca-certificates/10063152.pem
	I1205 19:24:09.628612 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:24:09.651308 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:24:09.673159 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:24:09.694151 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:24:09.714857 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:24:09.735426 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/certs/1006315.pem --> /usr/share/ca-certificates/1006315.pem (1338 bytes)
	I1205 19:24:09.757266 1089722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/ssl/certs/10063152.pem --> /usr/share/ca-certificates/10063152.pem (1708 bytes)
	I1205 19:24:09.778838 1089722 ssh_runner.go:195] Run: openssl version
	I1205 19:24:09.783603 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10063152.pem && ln -fs /usr/share/ca-certificates/10063152.pem /etc/ssl/certs/10063152.pem"
	I1205 19:24:09.792022 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10063152.pem
	I1205 19:24:09.795002 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:12 /usr/share/ca-certificates/10063152.pem
	I1205 19:24:09.795039 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10063152.pem
	I1205 19:24:09.801702 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10063152.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:24:09.809554 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:24:09.817599 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:24:09.820700 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:24:09.820752 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:24:09.826673 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:24:09.834392 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1006315.pem && ln -fs /usr/share/ca-certificates/1006315.pem /etc/ssl/certs/1006315.pem"
	I1205 19:24:09.842624 1089722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1006315.pem
	I1205 19:24:09.845605 1089722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:12 /usr/share/ca-certificates/1006315.pem
	I1205 19:24:09.845658 1089722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1006315.pem
	I1205 19:24:09.851923 1089722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1006315.pem /etc/ssl/certs/51391683.0"
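
Each CA copied under /usr/share/ca-certificates above also gets an /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL-based clients can find it; the hashes in the log (3ec20f2e, b5213941, 51391683) come from "openssl x509 -hash". A small helper that reproduces the same step (the function name is illustrative):

	# Link a CA into /etc/ssl/certs and create its OpenSSL subject-hash symlink.
	install_ca() {
	  local pem="$1" name hash
	  name="$(basename "$pem")"
	  sudo ln -fs "$pem" "/etc/ssl/certs/$name"
	  hash="$(openssl x509 -hash -noout -in "$pem")"      # e.g. b5213941 for minikubeCA.pem
	  sudo ln -fs "/etc/ssl/certs/$name" "/etc/ssl/certs/${hash}.0"
	}
	install_ca /usr/share/ca-certificates/minikubeCA.pem
	install_ca /usr/share/ca-certificates/1006315.pem
	install_ca /usr/share/ca-certificates/10063152.pem
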
	I1205 19:24:09.859481 1089722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:24:09.862522 1089722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:24:09.862566 1089722 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.2  false true} ...
	I1205 19:24:09.862643 1089722 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-392363-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-392363 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:24:09.862684 1089722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:24:09.869725 1089722 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:24:09.869773 1089722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1205 19:24:09.877627 1089722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 19:24:09.893770 1089722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:24:09.909500 1089722 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:24:09.912802 1089722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:24:09.922234 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:24:09.999657 1089722 ssh_runner.go:195] Run: sudo systemctl start kubelet
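
The kubelet drop-in printed at kubeadm.go:946 above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and picked up with a daemon-reload. The same step done by hand (unit content copied from the log; the heredoc uses <<- so the display tabs are stripped):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-392363-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet
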
	I1205 19:24:10.010283 1089722 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1205 19:24:10.010590 1089722 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:24:10.011686 1089722 out.go:177] * Verifying Kubernetes components...
	I1205 19:24:10.012904 1089722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:24:10.086300 1089722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:24:10.097286 1089722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:24:10.097505 1089722 kapi.go:59] client config for ha-392363: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/profiles/ha-392363/client.key", CAFile:"/home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:24:10.097564 1089722 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1205 19:24:10.097763 1089722 node_ready.go:35] waiting up to 6m0s for node "ha-392363-m04" to be "Ready" ...
	I1205 19:24:10.097833 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:10.097841 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:10.097848 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:10.097851 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:10.100638 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:10.598601 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:10.598625 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:10.598637 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:10.598644 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:10.601244 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:11.098146 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:11.098173 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:11.098186 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:11.098191 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:11.100960 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:11.598948 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:11.598971 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:11.598982 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:11.598987 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:11.601593 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:12.098551 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:12.098570 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:12.098579 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:12.098585 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:12.100980 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:12.101459 1089722 node_ready.go:53] node "ha-392363-m04" has status "Ready":"Unknown"
	I1205 19:24:12.598791 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:12.598811 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:12.598820 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:12.598824 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:12.601261 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:13.098131 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:13.098152 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:13.098160 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:13.098170 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:13.100719 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:13.598717 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:13.598742 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:13.598753 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:13.598761 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:13.601441 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:14.098261 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:14.098282 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:14.098291 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:14.098296 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:14.100928 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:14.101494 1089722 node_ready.go:53] node "ha-392363-m04" has status "Ready":"Unknown"
	I1205 19:24:14.598924 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:14.598946 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:14.598957 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:14.598963 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:14.601405 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:15.098016 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:15.098039 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:15.098048 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:15.098051 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:15.100781 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:15.598602 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:15.598624 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:15.598634 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:15.598640 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:15.601216 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:16.098957 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:16.098984 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:16.098996 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:16.099002 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:16.101521 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:16.102000 1089722 node_ready.go:53] node "ha-392363-m04" has status "Ready":"Unknown"
	I1205 19:24:16.598190 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:16.598214 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:16.598225 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:16.598232 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:16.600887 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:17.098921 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:17.098947 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:17.098958 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:17.098965 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:17.101507 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:17.598177 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:17.598199 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:17.598211 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:17.598217 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:17.600876 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:17.601509 1089722 node_ready.go:49] node "ha-392363-m04" has status "Ready":"True"
	I1205 19:24:17.601530 1089722 node_ready.go:38] duration metric: took 7.503755144s for node "ha-392363-m04" to be "Ready" ...
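
The node_ready wait above polls GET /api/v1/nodes/ha-392363-m04 roughly every 500ms until the Ready condition flips from Unknown to True (about 7.5s here). The same wait expressed with kubectl, again assuming the kubeconfig logged earlier in this section:

	KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	# Poll the Ready condition every 500ms, as the round_trippers loop above does.
	until [ "$(kubectl --kubeconfig "$KUBECONFIG" get node ha-392363-m04 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
	  sleep 0.5
	done
	echo "node ha-392363-m04 is Ready"
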
	I1205 19:24:17.601539 1089722 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:24:17.601624 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:24:17.601638 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:17.601647 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:17.601659 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:17.605724 1089722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:24:17.611132 1089722 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:17.611239 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:17.611255 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:17.611276 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:17.611280 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:17.613787 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:17.614472 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:17.614490 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:17.614500 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:17.614508 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:17.616363 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:18.112274 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:18.112297 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:18.112308 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:18.112314 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:18.115254 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:18.115922 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:18.115942 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:18.115953 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:18.115959 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:18.118430 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:18.612418 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:18.612450 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:18.612462 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:18.612469 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:18.615491 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:18.616309 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:18.616326 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:18.616334 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:18.616342 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:18.618473 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:19.111996 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:19.112025 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:19.112038 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:19.112043 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:19.114257 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:19.114976 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:19.114995 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:19.115003 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:19.115009 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:19.116893 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:19.611362 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:19.611387 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:19.611399 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:19.611406 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:19.614633 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:19.615391 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:19.615409 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:19.615420 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:19.615426 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:19.617728 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:19.618267 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:20.111928 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:20.111949 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:20.111957 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:20.111962 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:20.114784 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:20.115437 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:20.115454 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:20.115460 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:20.115464 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:20.117629 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:20.611471 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:20.611493 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:20.611505 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:20.611510 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:20.614248 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:20.614912 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:20.614932 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:20.614940 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:20.614945 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:20.617112 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:21.112034 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:21.112058 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:21.112070 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:21.112077 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:21.114795 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:21.115407 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:21.115422 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:21.115430 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:21.115434 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:21.117575 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:21.611374 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:21.611399 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:21.611408 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:21.611414 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:21.614489 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:21.615258 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:21.615275 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:21.615286 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:21.615296 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:21.617492 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:22.112325 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:22.112346 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:22.112355 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:22.112362 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:22.114813 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:22.115505 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:22.115522 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:22.115530 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:22.115533 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:22.117602 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:22.118011 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:22.611399 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:22.611420 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:22.611429 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:22.611432 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:22.614368 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:22.615015 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:22.615030 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:22.615038 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:22.615045 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:22.617119 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:23.111955 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:23.111977 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:23.111990 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:23.111995 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:23.114694 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:23.115325 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:23.115341 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:23.115348 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:23.115352 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:23.117421 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:23.612340 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:23.612369 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:23.612380 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:23.612387 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:23.615205 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:23.615817 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:23.615835 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:23.615843 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:23.615846 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:23.618058 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:24.112068 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:24.112093 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:24.112106 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:24.112119 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:24.115177 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:24.115949 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:24.115969 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:24.115980 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:24.115988 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:24.118635 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:24.119154 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:24.611469 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:24.611495 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:24.611507 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:24.611514 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:24.614544 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:24.615326 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:24.615344 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:24.615351 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:24.615354 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:24.617415 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:25.111371 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:25.111393 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:25.111404 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:25.111411 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:25.113920 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:25.114543 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:25.114560 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:25.114567 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:25.114572 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:25.116526 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:25.611318 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:25.611338 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:25.611355 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:25.611358 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:25.613968 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:25.614823 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:25.614843 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:25.614852 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:25.614858 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:25.617009 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:26.112075 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:26.112097 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:26.112106 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:26.112116 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:26.114616 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:26.115272 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:26.115289 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:26.115297 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:26.115306 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:26.117448 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:26.612343 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:26.612366 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:26.612374 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:26.612377 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:26.615029 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:26.615702 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:26.615720 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:26.615726 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:26.615730 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:26.617710 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:26.618300 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:27.111658 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:27.111679 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:27.111688 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:27.111693 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:27.114432 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:27.115044 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:27.115061 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:27.115072 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:27.115079 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:27.117245 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:27.612068 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:27.612089 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:27.612097 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:27.612102 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:27.614884 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:27.615480 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:27.615498 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:27.615505 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:27.615509 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:27.617509 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:28.112156 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:28.112177 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:28.112185 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:28.112189 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:28.114786 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:28.115563 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:28.115583 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:28.115595 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:28.115602 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:28.117672 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:28.611481 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:28.611502 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:28.611511 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:28.611516 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:28.614199 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:28.614960 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:28.614976 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:28.614983 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:28.614988 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:28.617189 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:29.111791 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:29.111810 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:29.111819 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:29.111825 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:29.114007 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:29.114719 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:29.114738 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:29.114746 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:29.114752 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:29.116552 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:29.116918 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:29.612334 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:29.612356 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:29.612364 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:29.612372 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:29.615097 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:29.615746 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:29.615765 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:29.615773 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:29.615776 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:29.617884 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:30.112031 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:30.112050 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:30.112058 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:30.112061 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:30.114623 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:30.115311 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:30.115328 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:30.115337 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:30.115341 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:30.117379 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:30.612240 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:30.612259 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:30.612268 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:30.612273 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:30.614934 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:30.615567 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:30.615586 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:30.615595 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:30.615601 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:30.617586 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:31.111441 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:31.111461 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:31.111477 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:31.111481 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:31.114061 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:31.114750 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:31.114770 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:31.114781 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:31.114786 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:31.116836 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:31.117242 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:31.611596 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:31.611616 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:31.611624 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:31.611628 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:31.614522 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:31.615188 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:31.615205 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:31.615212 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:31.615218 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:31.617364 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:32.112247 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:32.112268 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:32.112276 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:32.112281 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:32.115183 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:32.115894 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:32.115911 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:32.115920 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:32.115925 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:32.118058 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:32.611826 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:32.611845 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:32.611853 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:32.611858 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:32.614556 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:32.615213 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:32.615231 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:32.615238 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:32.615242 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:32.617238 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:33.112071 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:33.112092 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:33.112101 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:33.112105 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:33.114845 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:33.115666 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:33.115684 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:33.115696 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:33.115705 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:33.117882 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:33.118479 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:33.611650 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:33.611671 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:33.611680 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:33.611683 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:33.614373 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:33.615076 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:33.615092 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:33.615099 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:33.615102 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:33.617226 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:34.112192 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:34.112214 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:34.112222 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:34.112227 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:34.114965 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:34.115744 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:34.115765 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:34.115776 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:34.115780 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:34.117940 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:34.611733 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:34.611752 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:34.611761 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:34.611766 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:34.614389 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:34.615032 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:34.615047 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:34.615055 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:34.615060 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:34.617153 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:35.112104 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:35.112126 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:35.112135 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:35.112139 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:35.114983 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:35.115621 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:35.115637 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:35.115645 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:35.115650 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:35.117942 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:35.611827 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:35.611848 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:35.611856 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:35.611860 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:35.614669 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:35.615268 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:35.615284 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:35.615292 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:35.615297 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:35.617261 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:35.617846 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:36.112236 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:36.112256 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:36.112264 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:36.112267 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:36.115038 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:36.115692 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:36.115709 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:36.115716 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:36.115720 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:36.117730 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:36.612171 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:36.612197 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:36.612208 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:36.612213 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:36.615395 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:36.616043 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:36.616063 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:36.616071 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:36.616073 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:36.618356 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:37.111794 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:37.111821 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:37.111836 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:37.111841 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:37.116599 1089722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:24:37.117326 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:37.117345 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:37.117353 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:37.117363 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:37.119370 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:37.612225 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:37.612245 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:37.612253 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:37.612257 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:37.614983 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:37.615676 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:37.615695 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:37.615702 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:37.615721 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:37.617735 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:37.618271 1089722 pod_ready.go:103] pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace has status "Ready":"False"
	I1205 19:24:38.111558 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2n94f
	I1205 19:24:38.111580 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.111589 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.111594 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.114230 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:38.114910 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:38.114926 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.114934 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.114937 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.116930 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.117446 1089722 pod_ready.go:98] node "ha-392363" hosting pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.117465 1089722 pod_ready.go:82] duration metric: took 20.506308352s for pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace to be "Ready" ...
	E1205 19:24:38.117475 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363" hosting pod "coredns-7c65d6cfc9-2n94f" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
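The two entries above show the other half of the check: before trusting a pod's condition, the wait inspects the node hosting it, and because ha-392363 reports Ready=Unknown after the restart, the pod is skipped rather than waited on. A hedged sketch of such a node check, again in client-go terms with an assumed helper name:

	package waiters

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady reports whether the node's Ready condition is True.
	// A status of Unknown (as logged above for ha-392363) returns false,
	// which is what allows the caller to skip pods scheduled on that node.
	func nodeIsReady(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}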
	I1205 19:24:38.117485 1089722 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4wfjm" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.117548 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4wfjm
	I1205 19:24:38.117559 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.117566 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.117574 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.119487 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.120051 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:38.120070 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.120079 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.120087 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.121896 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.122339 1089722 pod_ready.go:98] node "ha-392363" hosting pod "coredns-7c65d6cfc9-4wfjm" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.122357 1089722 pod_ready.go:82] duration metric: took 4.865039ms for pod "coredns-7c65d6cfc9-4wfjm" in "kube-system" namespace to be "Ready" ...
	E1205 19:24:38.122371 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363" hosting pod "coredns-7c65d6cfc9-4wfjm" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.122380 1089722 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.122440 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-392363
	I1205 19:24:38.122447 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.122454 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.122459 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.124045 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.124567 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:38.124584 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.124594 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.124601 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.127987 1089722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:24:38.128451 1089722 pod_ready.go:98] node "ha-392363" hosting pod "etcd-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.128469 1089722 pod_ready.go:82] duration metric: took 6.081255ms for pod "etcd-ha-392363" in "kube-system" namespace to be "Ready" ...
	E1205 19:24:38.128475 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363" hosting pod "etcd-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.128483 1089722 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.128528 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-392363-m02
	I1205 19:24:38.128534 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.128541 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.128546 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.130355 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.130832 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:38.130845 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.130852 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.130857 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.132501 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.132902 1089722 pod_ready.go:93] pod "etcd-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:24:38.132918 1089722 pod_ready.go:82] duration metric: took 4.427918ms for pod "etcd-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.132934 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.132985 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363
	I1205 19:24:38.132994 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.133000 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.133005 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.134818 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.135423 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:38.135438 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.135445 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.135448 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.137058 1089722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:24:38.137585 1089722 pod_ready.go:98] node "ha-392363" hosting pod "kube-apiserver-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.137608 1089722 pod_ready.go:82] duration metric: took 4.663391ms for pod "kube-apiserver-ha-392363" in "kube-system" namespace to be "Ready" ...
	E1205 19:24:38.137618 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363" hosting pod "kube-apiserver-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.137626 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.312012 1089722 request.go:632] Waited for 174.309962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363-m02
	I1205 19:24:38.312070 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-392363-m02
	I1205 19:24:38.312091 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.312102 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.312106 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.314714 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
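The request.go:632 "Waited ... due to client-side throttling" entries above and below come from client-go's own token-bucket rate limiter, not from API Priority and Fairness on the server. The relevant knobs live on rest.Config; the sketch below shows where they would be raised (the 50/100 values are arbitrary examples, while the 5/10 noted in comments are client-go's usual defaults).

	package clientcfg

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClient builds a clientset whose client-side rate limiter is more
	// permissive than the defaults, which reduces the ~200ms throttle
	// waits seen in this log.
	func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default burst is 10
		return kubernetes.NewForConfig(cfg)
	}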
	I1205 19:24:38.511673 1089722 request.go:632] Waited for 196.272658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:38.511765 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:38.511773 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.511787 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.511797 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.514420 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:38.514941 1089722 pod_ready.go:93] pod "kube-apiserver-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:24:38.514963 1089722 pod_ready.go:82] duration metric: took 377.324076ms for pod "kube-apiserver-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.514975 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:38.711961 1089722 request.go:632] Waited for 196.89014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363
	I1205 19:24:38.712025 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363
	I1205 19:24:38.712033 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.712045 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.712055 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.714691 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:38.911641 1089722 request.go:632] Waited for 196.271752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:38.911727 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:38.911738 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:38.911750 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:38.911763 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:38.914188 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:38.914739 1089722 pod_ready.go:98] node "ha-392363" hosting pod "kube-controller-manager-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.914762 1089722 pod_ready.go:82] duration metric: took 399.780459ms for pod "kube-controller-manager-ha-392363" in "kube-system" namespace to be "Ready" ...
	E1205 19:24:38.914771 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363" hosting pod "kube-controller-manager-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:38.914779 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:39.111815 1089722 request.go:632] Waited for 196.941489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363-m02
	I1205 19:24:39.111911 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-392363-m02
	I1205 19:24:39.111920 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:39.111929 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:39.111935 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:39.114297 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:39.312417 1089722 request.go:632] Waited for 197.358296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:39.312472 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:39.312477 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:39.312485 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:39.312490 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:39.314864 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:39.315395 1089722 pod_ready.go:93] pod "kube-controller-manager-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:24:39.315415 1089722 pod_ready.go:82] duration metric: took 400.628277ms for pod "kube-controller-manager-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:39.315425 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fz7rx" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:39.512364 1089722 request.go:632] Waited for 196.844453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fz7rx
	I1205 19:24:39.512421 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fz7rx
	I1205 19:24:39.512426 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:39.512437 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:39.512442 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:39.515063 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:39.712074 1089722 request.go:632] Waited for 196.359163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:39.712163 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m04
	I1205 19:24:39.712175 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:39.712186 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:39.712194 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:39.714785 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:39.715330 1089722 pod_ready.go:93] pod "kube-proxy-fz7rx" in "kube-system" namespace has status "Ready":"True"
	I1205 19:24:39.715351 1089722 pod_ready.go:82] duration metric: took 399.918258ms for pod "kube-proxy-fz7rx" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:39.715363 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kpdtp" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:39.912269 1089722 request.go:632] Waited for 196.815774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kpdtp
	I1205 19:24:39.912341 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kpdtp
	I1205 19:24:39.912350 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:39.912363 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:39.912371 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:39.914984 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:40.111792 1089722 request.go:632] Waited for 196.110621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:40.111848 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:40.111854 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:40.111861 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:40.111870 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:40.114212 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:40.114717 1089722 pod_ready.go:93] pod "kube-proxy-kpdtp" in "kube-system" namespace has status "Ready":"True"
	I1205 19:24:40.114736 1089722 pod_ready.go:82] duration metric: took 399.366807ms for pod "kube-proxy-kpdtp" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:40.114745 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz9hx" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:40.311662 1089722 request.go:632] Waited for 196.842427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz9hx
	I1205 19:24:40.311738 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz9hx
	I1205 19:24:40.311746 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:40.311757 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:40.311763 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:40.314358 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:40.512415 1089722 request.go:632] Waited for 197.34909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:40.512508 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:40.512520 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:40.512532 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:40.512544 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:40.515229 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:40.515745 1089722 pod_ready.go:98] node "ha-392363" hosting pod "kube-proxy-wz9hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:40.515767 1089722 pod_ready.go:82] duration metric: took 401.016268ms for pod "kube-proxy-wz9hx" in "kube-system" namespace to be "Ready" ...
	E1205 19:24:40.515778 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363" hosting pod "kube-proxy-wz9hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:40.515791 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-392363" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:40.711832 1089722 request.go:632] Waited for 195.950735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363
	I1205 19:24:40.711890 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363
	I1205 19:24:40.711896 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:40.711904 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:40.711909 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:40.714360 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:40.912289 1089722 request.go:632] Waited for 197.342128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:40.912396 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363
	I1205 19:24:40.912408 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:40.912417 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:40.912423 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:40.915132 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:40.915671 1089722 pod_ready.go:98] node "ha-392363" hosting pod "kube-scheduler-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:40.915693 1089722 pod_ready.go:82] duration metric: took 399.894438ms for pod "kube-scheduler-ha-392363" in "kube-system" namespace to be "Ready" ...
	E1205 19:24:40.915702 1089722 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-392363" hosting pod "kube-scheduler-ha-392363" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-392363" has status "Ready":"Unknown"
	I1205 19:24:40.915709 1089722 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:41.111745 1089722 request.go:632] Waited for 195.957508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363-m02
	I1205 19:24:41.111823 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-392363-m02
	I1205 19:24:41.111832 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:41.111846 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:41.111856 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:41.114692 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:41.311705 1089722 request.go:632] Waited for 196.268847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:41.311787 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-392363-m02
	I1205 19:24:41.311795 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:41.311803 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:41.311806 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:41.314443 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:41.314948 1089722 pod_ready.go:93] pod "kube-scheduler-ha-392363-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:24:41.314968 1089722 pod_ready.go:82] duration metric: took 399.250434ms for pod "kube-scheduler-ha-392363-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:24:41.314980 1089722 pod_ready.go:39] duration metric: took 23.713431405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:24:41.315016 1089722 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:24:41.315073 1089722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:24:41.326139 1089722 system_svc.go:56] duration metric: took 11.114216ms WaitForService to wait for kubelet
	I1205 19:24:41.326171 1089722 kubeadm.go:582] duration metric: took 31.315834194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:24:41.326196 1089722 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:24:41.511581 1089722 request.go:632] Waited for 185.279361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1205 19:24:41.511652 1089722 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1205 19:24:41.511661 1089722 round_trippers.go:469] Request Headers:
	I1205 19:24:41.511668 1089722 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:24:41.511673 1089722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:24:41.514695 1089722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:24:41.516061 1089722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:24:41.516089 1089722 node_conditions.go:123] node cpu capacity is 8
	I1205 19:24:41.516107 1089722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:24:41.516113 1089722 node_conditions.go:123] node cpu capacity is 8
	I1205 19:24:41.516119 1089722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:24:41.516124 1089722 node_conditions.go:123] node cpu capacity is 8
	I1205 19:24:41.516133 1089722 node_conditions.go:105] duration metric: took 189.926768ms to run NodePressure ...
	I1205 19:24:41.516148 1089722 start.go:241] waiting for startup goroutines ...
	I1205 19:24:41.516186 1089722 start.go:255] writing updated cluster config ...
	I1205 19:24:41.516606 1089722 ssh_runner.go:195] Run: rm -f paused
	I1205 19:24:41.564286 1089722 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:24:41.566205 1089722 out.go:177] * Done! kubectl is now configured to use "ha-392363" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 19:23:59 ha-392363 crio[684]: time="2024-12-05 19:23:59.569851681Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c51da728a6798017d18a924efba327eaf321f1fe32bbf020ddfe99e0f52bbb9a/merged/etc/group: no such file or directory"
	Dec 05 19:23:59 ha-392363 crio[684]: time="2024-12-05 19:23:59.603170360Z" level=info msg="Created container d317737d92b261311510c350a690d32bcdb85b7944400a4bfd38a48649057217: kube-system/kube-vip-ha-392363/kube-vip" id=bcb797de-4da7-4938-b285-1be7802dea50 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:23:59 ha-392363 crio[684]: time="2024-12-05 19:23:59.603699945Z" level=info msg="Starting container: d317737d92b261311510c350a690d32bcdb85b7944400a4bfd38a48649057217" id=de3b2a61-71f9-4141-881e-a9dbbde34bda name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 19:23:59 ha-392363 crio[684]: time="2024-12-05 19:23:59.608658750Z" level=info msg="Started container" PID=2042 containerID=d317737d92b261311510c350a690d32bcdb85b7944400a4bfd38a48649057217 description=kube-system/kube-vip-ha-392363/kube-vip id=de3b2a61-71f9-4141-881e-a9dbbde34bda name=/runtime.v1.RuntimeService/StartContainer sandboxID=e70a8ba44c403ff1964737f2c513091368694ce36ce3c503e47acaf9a9a0ac9f
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.340330026Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.2" id=4ce6ef4b-81e5-44a6-81d5-746ac22ce8eb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.340587423Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752],Size_:89474374,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=4ce6ef4b-81e5-44a6-81d5-746ac22ce8eb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.341308222Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.2" id=4af7b720-a09d-4b07-9979-5eec4a3eeaf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.341528834Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752],Size_:89474374,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=4af7b720-a09d-4b07-9979-5eec4a3eeaf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.342198576Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-392363/kube-controller-manager" id=b1d1c052-f5fc-4400-aaf9-161b6a3e66f0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.342296515Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.410470623Z" level=info msg="Created container aec5e3b26d0771f0a201bb44a1be899ed0c158ee299426e866024c3d273c60b7: kube-system/kube-controller-manager-ha-392363/kube-controller-manager" id=b1d1c052-f5fc-4400-aaf9-161b6a3e66f0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.411044954Z" level=info msg="Starting container: aec5e3b26d0771f0a201bb44a1be899ed0c158ee299426e866024c3d273c60b7" id=882b8abd-ebb0-4c6e-b6fe-0c128a79cd7e name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 19:24:00 ha-392363 crio[684]: time="2024-12-05 19:24:00.416894876Z" level=info msg="Started container" PID=2090 containerID=aec5e3b26d0771f0a201bb44a1be899ed0c158ee299426e866024c3d273c60b7 description=kube-system/kube-controller-manager-ha-392363/kube-controller-manager id=882b8abd-ebb0-4c6e-b6fe-0c128a79cd7e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9955a12684b309953111571a3551f0c53d0e673f8817b2ba25e1862b66e62764
	Dec 05 19:24:11 ha-392363 conmon[1467]: conmon e5d19b48b614bf94b349 <ninfo>: container 1490 exited with status 1
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.589338024Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=413c80f0-ba47-42b1-a625-0283e9427dd4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.589597576Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=413c80f0-ba47-42b1-a625-0283e9427dd4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.590286486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a1277cb-d829-4f33-83b2-1fa3e17f7668 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.590486639Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6a1277cb-d829-4f33-83b2-1fa3e17f7668 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.591103130Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ceced5b5-844b-4d45-bd1d-895423a4760e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.591210884Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.601479501Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/85a528952b44751647f8b411e9fd2958a1dc66fbd2d0bf43cab89d2cf13f8030/merged/etc/passwd: no such file or directory"
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.601521563Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/85a528952b44751647f8b411e9fd2958a1dc66fbd2d0bf43cab89d2cf13f8030/merged/etc/group: no such file or directory"
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.636336197Z" level=info msg="Created container d4b9bc8d4057ae9f20860e7b30f37f53453aaa887c95cbf5b08cbc769fdd5af6: kube-system/storage-provisioner/storage-provisioner" id=ceced5b5-844b-4d45-bd1d-895423a4760e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.636832335Z" level=info msg="Starting container: d4b9bc8d4057ae9f20860e7b30f37f53453aaa887c95cbf5b08cbc769fdd5af6" id=9fd8820e-6804-46de-8998-a3118ad240b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 19:24:11 ha-392363 crio[684]: time="2024-12-05 19:24:11.642138177Z" level=info msg="Started container" PID=2148 containerID=d4b9bc8d4057ae9f20860e7b30f37f53453aaa887c95cbf5b08cbc769fdd5af6 description=kube-system/storage-provisioner/storage-provisioner id=9fd8820e-6804-46de-8998-a3118ad240b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a715793062fe60fb6372961a3c2e817bb3bbd662a245410932dc1dae9d76ad1e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4b9bc8d4057a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago       Running             storage-provisioner       4                   a715793062fe6       storage-provisioner
	aec5e3b26d077       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   43 seconds ago       Running             kube-controller-manager   8                   9955a12684b30       kube-controller-manager-ha-392363
	d317737d92b26       4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812   43 seconds ago       Running             kube-vip                  3                   e70a8ba44c403       kube-vip-ha-392363
	9de90a5f2e926       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   46 seconds ago       Running             kube-apiserver            4                   5e6ce5f021d33       kube-apiserver-ha-392363
	6e1ea568057bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   59 seconds ago       Running             coredns                   2                   9e95a7fe961d1       coredns-7c65d6cfc9-4wfjm
	a0cb85a049bb0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   About a minute ago   Running             busybox                   2                   475a5fd9482c9       busybox-7dff88458-d5wq4
	7af8d9b0ead72       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   About a minute ago   Running             kube-proxy                2                   d42c149cd4e55       kube-proxy-wz9hx
	7c45bed10198b       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5   About a minute ago   Running             kindnet-cni               2                   eda44a31e0750       kindnet-w8jfq
	52e1f4dceb3ae       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Running             coredns                   2                   5b3654a7704cd       coredns-7c65d6cfc9-2n94f
	e5d19b48b614b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       3                   a715793062fe6       storage-provisioner
	3e7aebbf675c1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   About a minute ago   Exited              kube-controller-manager   7                   9955a12684b30       kube-controller-manager-ha-392363
	2c891ea8f3512       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   About a minute ago   Exited              kube-apiserver            3                   5e6ce5f021d33       kube-apiserver-ha-392363
	63964d5bac586       4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812   About a minute ago   Exited              kube-vip                  2                   e70a8ba44c403       kube-vip-ha-392363
	ea568b79de478       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   29df0e8d386ab       etcd-ha-392363
	7ca8c0a1f930b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   About a minute ago   Running             kube-scheduler            2                   5e7b4b1285477       kube-scheduler-ha-392363
	
	
	==> coredns [52e1f4dceb3aed0bc554a18d80288a1d8dc727eea24f0f6ab13f74e1404e8469] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48972 - 5346 "HINFO IN 4527023882567072241.6452072135725259943. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008968335s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[956880822]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Dec-2024 19:23:41.436) (total time: 30000ms):
	Trace[956880822]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:24:11.437)
	Trace[956880822]: [30.000745134s] [30.000745134s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[524541877]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Dec-2024 19:23:41.436) (total time: 30000ms):
	Trace[524541877]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:24:11.437)
	Trace[524541877]: [30.000852816s] [30.000852816s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[908345213]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Dec-2024 19:23:41.436) (total time: 30000ms):
	Trace[908345213]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:24:11.437)
	Trace[908345213]: [30.00088709s] [30.00088709s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [6e1ea568057bb15a7d8f97ea4da6eebcb83ed820eb4638f68641e47a67b737f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53411 - 39300 "HINFO IN 7470254636991327657.152591403043276592. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010991507s
	
	
	==> describe nodes <==
	Name:               ha-392363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-392363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-392363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_15_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:15:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-392363
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:24:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 19:23:24 +0000   Thu, 05 Dec 2024 19:24:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 19:23:24 +0000   Thu, 05 Dec 2024 19:24:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 19:23:24 +0000   Thu, 05 Dec 2024 19:24:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 19:23:24 +0000   Thu, 05 Dec 2024 19:24:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-392363
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6c664e4239e4a8a98172fc1d64800cc
	  System UUID:                e8e8d9a2-5673-4c47-9b98-a9204447d756
	  Boot ID:                    63e29e64-0755-4812-a891-d8a092e25c6a
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d5wq4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 coredns-7c65d6cfc9-2n94f             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m50s
	  kube-system                 coredns-7c65d6cfc9-4wfjm             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m50s
	  kube-system                 etcd-ha-392363                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m55s
	  kube-system                 kindnet-w8jfq                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m51s
	  kube-system                 kube-apiserver-ha-392363             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-controller-manager-ha-392363    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-proxy-wz9hx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 kube-scheduler-ha-392363             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-vip-ha-392363                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m49s                  kube-proxy       
	  Normal   Starting                 61s                    kube-proxy       
	  Normal   Starting                 4m35s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  8m55s                  kubelet          Node ha-392363 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m55s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 8m55s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    8m55s                  kubelet          Node ha-392363 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m55s                  kubelet          Node ha-392363 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m51s                  node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   NodeReady                8m37s                  kubelet          Node ha-392363 status is now: NodeReady
	  Normal   RegisteredNode           8m28s                  node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   RegisteredNode           7m55s                  node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-392363 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-392363 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-392363 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 5m24s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   RegisteredNode           4m2s                   node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   RegisteredNode           3m26s                  node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-392363 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-392363 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-392363 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                    node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   RegisteredNode           41s                    node-controller  Node ha-392363 event: Registered Node ha-392363 in Controller
	  Normal   NodeNotReady             6s                     node-controller  Node ha-392363 status is now: NodeNotReady
	
	
	Name:               ha-392363-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-392363-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-392363
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_16_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:16:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-392363-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:24:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:26 +0000   Thu, 05 Dec 2024 19:16:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:26 +0000   Thu, 05 Dec 2024 19:16:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:26 +0000   Thu, 05 Dec 2024 19:16:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:26 +0000   Thu, 05 Dec 2024 19:16:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-392363-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 526f89865f4f45d695d9bd3847b9eb10
	  System UUID:                68f5644a-ba93-4d93-9737-8d45c48496f5
	  Boot ID:                    63e29e64-0755-4812-a891-d8a092e25c6a
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-57mhb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 etcd-ha-392363-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m35s
	  kube-system                 kindnet-xp8pn                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m37s
	  kube-system                 kube-apiserver-ha-392363-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-controller-manager-ha-392363-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-kpdtp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-scheduler-ha-392363-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-vip-ha-392363-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m34s                  kube-proxy       
	  Normal   Starting                 6m8s                   kube-proxy       
	  Normal   Starting                 4m45s                  kube-proxy       
	  Normal   Starting                 71s                    kube-proxy       
	  Normal   NodeHasSufficientPID     8m37s (x7 over 8m37s)  kubelet          Node ha-392363-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m37s (x8 over 8m37s)  kubelet          Node ha-392363-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m37s (x8 over 8m37s)  kubelet          Node ha-392363-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           8m36s                  node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   RegisteredNode           8m28s                  node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   RegisteredNode           7m55s                  node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   NodeHasSufficientPID     6m30s (x7 over 6m30s)  kubelet          Node ha-392363-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m30s (x8 over 6m30s)  kubelet          Node ha-392363-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m30s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m30s (x8 over 6m30s)  kubelet          Node ha-392363-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-392363-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-392363-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-392363-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m22s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   RegisteredNode           4m2s                   node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   RegisteredNode           3m26s                  node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   Starting                 116s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 116s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node ha-392363-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node ha-392363-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x7 over 116s)    kubelet          Node ha-392363-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                    node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	  Normal   RegisteredNode           41s                    node-controller  Node ha-392363-m02 event: Registered Node ha-392363-m02 in Controller
	
	
	Name:               ha-392363-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-392363-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-392363
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_17_22_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-392363-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:24:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:24:17 +0000   Thu, 05 Dec 2024 19:24:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:24:17 +0000   Thu, 05 Dec 2024 19:24:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:24:17 +0000   Thu, 05 Dec 2024 19:24:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:24:17 +0000   Thu, 05 Dec 2024 19:24:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-392363-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 290819ee2ad949c59e0007ce82718734
	  System UUID:                50adf6cb-352d-4dd5-8ca0-67ae40d2014c
	  Boot ID:                    63e29e64-0755-4812-a891-d8a092e25c6a
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lgm67    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kindnet-4kzwv              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m22s
	  kube-system                 kube-proxy-fz7rx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m55s                  kube-proxy       
	  Normal   Starting                 19s                    kube-proxy       
	  Normal   Starting                 7m19s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    7m22s (x2 over 7m22s)  kubelet          Node ha-392363-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m22s (x2 over 7m22s)  kubelet          Node ha-392363-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  7m22s (x2 over 7m22s)  kubelet          Node ha-392363-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   RegisteredNode           7m18s                  node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   NodeReady                7m7s                   kubelet          Node ha-392363-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   NodeNotReady             4m8s                   node-controller  Node ha-392363-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m2s                   node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   RegisteredNode           3m26s                  node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   Starting                 3m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m12s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m5s (x7 over 3m12s)   kubelet          Node ha-392363-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m59s (x8 over 3m12s)  kubelet          Node ha-392363-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m59s (x8 over 3m12s)  kubelet          Node ha-392363-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           81s                    node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   NodeNotReady             41s                    node-controller  Node ha-392363-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           41s                    node-controller  Node ha-392363-m04 event: Registered Node ha-392363-m04 in Controller
	  Normal   Starting                 39s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     32s (x7 over 39s)      kubelet          Node ha-392363-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    26s (x8 over 39s)      kubelet          Node ha-392363-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  26s (x8 over 39s)      kubelet          Node ha-392363-m04 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	[  -0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000003] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +1.000737] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000009] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.000013] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000007] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000006] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +2.015841] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000006] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000005] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +4.251675] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000006] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.004004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000006] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.000036] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000012] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +8.187295] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000006] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.003987] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-9251d5f0ef75
	[  +0.000005] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 60 ab 6a 42 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [ea568b79de478ca1b6298d3976ef746dbaddde659526be41a310170982bce064] <==
	{"level":"warn","ts":"2024-12-05T19:23:17.201530Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:09.520174Z","time spent":"7.681345283s","remote":"127.0.0.1:49836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-05T19:23:17.200470Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.680441038s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-12-05T19:23:17.203784Z","caller":"traceutil/trace.go:171","msg":"trace[982333368] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; }","duration":"7.683757493s","start":"2024-12-05T19:23:09.520015Z","end":"2024-12-05T19:23:17.203772Z","steps":["trace[982333368] 'agreement among raft nodes before linearized reading'  (duration: 7.680439829s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.203821Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:09.519998Z","time spent":"7.683811558s","remote":"127.0.0.1:50138","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-05T19:23:17.200499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.680530985s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-12-05T19:23:17.203850Z","caller":"traceutil/trace.go:171","msg":"trace[597155018] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; }","duration":"7.683884696s","start":"2024-12-05T19:23:09.519959Z","end":"2024-12-05T19:23:17.203844Z","steps":["trace[597155018] 'agreement among raft nodes before linearized reading'  (duration: 7.680531138s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.203869Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:09.519937Z","time spent":"7.683926494s","remote":"127.0.0.1:49700","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	{"level":"info","ts":"2024-12-05T19:23:17.201638Z","caller":"traceutil/trace.go:171","msg":"trace[2037520023] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; }","duration":"7.678609156s","start":"2024-12-05T19:23:09.523020Z","end":"2024-12-05T19:23:17.201629Z","steps":["trace[2037520023] 'agreement among raft nodes before linearized reading'  (duration: 7.677451845s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.203902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:09.523005Z","time spent":"7.680889114s","remote":"127.0.0.1:49860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-05T19:23:17.200493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.677478931s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-12-05T19:23:17.203931Z","caller":"traceutil/trace.go:171","msg":"trace[2002027419] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; }","duration":"7.680916516s","start":"2024-12-05T19:23:09.523009Z","end":"2024-12-05T19:23:17.203925Z","steps":["trace[2002027419] 'agreement among raft nodes before linearized reading'  (duration: 7.677479083s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.203950Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:09.522992Z","time spent":"7.680952997s","remote":"127.0.0.1:49828","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-05T19:23:17.200476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.680474944s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-12-05T19:23:17.203986Z","caller":"traceutil/trace.go:171","msg":"trace[1773290644] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; }","duration":"7.683986098s","start":"2024-12-05T19:23:09.519993Z","end":"2024-12-05T19:23:17.203979Z","steps":["trace[1773290644] 'agreement among raft nodes before linearized reading'  (duration: 7.680475416s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.204008Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:09.519969Z","time spent":"7.68403168s","remote":"127.0.0.1:49928","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-05T19:23:17.200476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.677882993s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-12-05T19:23:17.204037Z","caller":"traceutil/trace.go:171","msg":"trace[563280834] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"7.681443926s","start":"2024-12-05T19:23:09.522588Z","end":"2024-12-05T19:23:17.204032Z","steps":["trace[563280834] 'agreement among raft nodes before linearized reading'  (duration: 7.6778829s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.204056Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:09.522567Z","time spent":"7.681482479s","remote":"127.0.0.1:49796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-05T19:23:17.201724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.677695028s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-392363-m02\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-12-05T19:23:17.204083Z","caller":"traceutil/trace.go:171","msg":"trace[412812201] range","detail":"{range_begin:/registry/minions/ha-392363-m02; range_end:; }","duration":"8.680057719s","start":"2024-12-05T19:23:08.524020Z","end":"2024-12-05T19:23:17.204078Z","steps":["trace[412812201] 'agreement among raft nodes before linearized reading'  (duration: 8.677694424s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.204098Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:08.523985Z","time spent":"8.680108993s","remote":"127.0.0.1:49796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":0,"response size":0,"request content":"key:\"/registry/minions/ha-392363-m02\" "}
	{"level":"warn","ts":"2024-12-05T19:23:17.206331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"688.137107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:23:17.206383Z","caller":"traceutil/trace.go:171","msg":"trace[179549606] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:2327; }","duration":"688.195013ms","start":"2024-12-05T19:23:16.518178Z","end":"2024-12-05T19:23:17.206373Z","steps":["trace[179549606] 'agreement among raft nodes before linearized reading'  (duration: 688.081165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:23:17.206413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:23:16.518140Z","time spent":"688.265263ms","remote":"127.0.0.1:49638","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 "}
	
	
	==> kernel <==
	 19:24:43 up 22:07,  0 users,  load average: 0.43, 1.70, 19.70
	Linux ha-392363 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7c45bed10198bea0f1f836d5087479f8b8c869c8f9fe4a6f35c157f3344c88c9] <==
	I1205 19:24:02.976083       1 main.go:324] Node ha-392363-m04 has CIDR [10.244.3.0/24] 
	I1205 19:24:12.978184       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1205 19:24:12.978223       1 main.go:324] Node ha-392363-m04 has CIDR [10.244.3.0/24] 
	I1205 19:24:12.978512       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:24:12.978544       1 main.go:301] handling current node
	I1205 19:24:12.978559       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1205 19:24:12.978564       1 main.go:324] Node ha-392363-m02 has CIDR [10.244.1.0/24] 
	I1205 19:24:22.978153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:24:22.978232       1 main.go:301] handling current node
	I1205 19:24:22.978253       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1205 19:24:22.978261       1 main.go:324] Node ha-392363-m02 has CIDR [10.244.1.0/24] 
	I1205 19:24:22.978477       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1205 19:24:22.978492       1 main.go:324] Node ha-392363-m04 has CIDR [10.244.3.0/24] 
	I1205 19:24:32.981821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:24:32.981853       1 main.go:301] handling current node
	I1205 19:24:32.981868       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1205 19:24:32.981873       1 main.go:324] Node ha-392363-m02 has CIDR [10.244.1.0/24] 
	I1205 19:24:32.982067       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1205 19:24:32.982079       1 main.go:324] Node ha-392363-m04 has CIDR [10.244.3.0/24] 
	I1205 19:24:42.974677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:24:42.974719       1 main.go:301] handling current node
	I1205 19:24:42.974735       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1205 19:24:42.974740       1 main.go:324] Node ha-392363-m02 has CIDR [10.244.1.0/24] 
	I1205 19:24:42.974937       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1205 19:24:42.974950       1 main.go:324] Node ha-392363-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2c891ea8f3512b510eb68704b278e0ca661cb280ec8992b791acdcde8652490d] <==
	E1205 19:23:17.205973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicyBinding: failed to list *v1.ValidatingAdmissionPolicyBinding: etcdserver: leader changed" logger="UnhandledError"
	W1205 19:23:17.205929       1 reflector.go:561] pkg/client/informers/externalversions/factory.go:141: failed to list *v1.APIService: etcdserver: leader changed
	W1205 19:23:17.205931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: etcdserver: leader changed
	E1205 19:23:17.206018       1 reflector.go:158] "Unhandled Error" err="pkg/client/informers/externalversions/factory.go:141: Failed to watch *v1.APIService: failed to list *v1.APIService: etcdserver: leader changed" logger="UnhandledError"
	E1205 19:23:17.206036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: etcdserver: leader changed" logger="UnhandledError"
	I1205 19:23:17.613300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 19:23:19.211004       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1205 19:23:19.211043       1 aggregator.go:171] initial CRD sync complete...
	I1205 19:23:19.211054       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 19:23:19.211062       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 19:23:19.311127       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 19:23:19.311146       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 19:23:19.529962       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 19:23:19.811720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1205 19:23:19.984844       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 19:23:19.984871       1 policy_source.go:224] refreshing policies
	E1205 19:23:20.016396       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 19:23:20.211024       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 19:23:20.211044       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 19:23:20.211051       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 19:23:20.211153       1 cache.go:39] Caches are synced for autoregister controller
	I1205 19:23:20.216294       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 19:23:20.310673       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 19:23:20.325056       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	F1205 19:23:55.610538       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [9de90a5f2e926a20c9cade5ca26db8e593065e834451d2db62cad7dbc67390cc] <==
	I1205 19:23:57.832574       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1205 19:23:57.832672       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1205 19:23:57.832696       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I1205 19:23:57.832536       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1205 19:23:57.974550       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 19:23:57.984449       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 19:23:57.984573       1 policy_source.go:224] refreshing policies
	I1205 19:23:57.987214       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1205 19:23:57.987306       1 aggregator.go:171] initial CRD sync complete...
	I1205 19:23:57.987338       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 19:23:57.987369       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 19:23:57.987397       1 cache.go:39] Caches are synced for autoregister controller
	I1205 19:23:57.996695       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 19:23:58.010090       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 19:23:58.010104       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 19:23:58.075047       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1205 19:23:58.075763       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 19:23:58.075835       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 19:23:58.076057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 19:23:58.077860       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 19:23:58.085067       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 19:23:58.837924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1205 19:23:59.107943       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1205 19:23:59.109088       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 19:23:59.113961       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3e7aebbf675c1a84b50aae18c7d99970eedd3f873c76658026378f3a7882168d] <==
	I1205 19:23:29.944515       1 serving.go:386] Generated self-signed cert in-memory
	I1205 19:23:30.237471       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1205 19:23:30.237496       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:23:30.238840       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 19:23:30.238852       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 19:23:30.239069       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1205 19:23:30.239160       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1205 19:23:40.247473       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [aec5e3b26d0771f0a201bb44a1be899ed0c158ee299426e866024c3d273c60b7] <==
	I1205 19:24:03.224599       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 19:24:07.646209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363-m04"
	I1205 19:24:08.022038       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363-m04"
	I1205 19:24:17.536240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363-m04"
	I1205 19:24:17.536288       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-392363-m04"
	I1205 19:24:17.548591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363-m04"
	I1205 19:24:17.619999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363-m04"
	I1205 19:24:23.790579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="72.913µs"
	I1205 19:24:24.879341       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.031685ms"
	I1205 19:24:24.879421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.4µs"
	I1205 19:24:37.969094       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-392363-m04"
	I1205 19:24:37.969631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363"
	I1205 19:24:37.983611       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363"
	I1205 19:24:38.019829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.172002ms"
	I1205 19:24:38.020867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.067µs"
	I1205 19:24:38.035322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.844772ms"
	I1205 19:24:38.035462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.33µs"
	I1205 19:24:38.052049       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xjd7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xjd7h\": the object has been modified; please apply your changes to the latest version and try again"
	I1205 19:24:38.052133       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6e301cb4-00b3-4aa2-a206-63885175ff48", APIVersion:"v1", ResourceVersion:"247", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xjd7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xjd7h": the object has been modified; please apply your changes to the latest version and try again
	I1205 19:24:42.686792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363"
	I1205 19:24:43.153187       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xjd7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xjd7h\": the object has been modified; please apply your changes to the latest version and try again"
	I1205 19:24:43.154214       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6e301cb4-00b3-4aa2-a206-63885175ff48", APIVersion:"v1", ResourceVersion:"247", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xjd7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xjd7h": the object has been modified; please apply your changes to the latest version and try again
	I1205 19:24:43.184330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-392363"
	I1205 19:24:43.188898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.788114ms"
	I1205 19:24:43.189015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.477µs"
	
	
	==> kube-proxy [7af8d9b0ead72ee3569633c4f60f1fd410fbf934c75f32dce1a76f84e373a12e] <==
	I1205 19:23:42.439954       1 server_linux.go:66] "Using iptables proxy"
	I1205 19:23:42.543981       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 19:23:42.544042       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:23:42.563263       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 19:23:42.563309       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:23:42.565269       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:23:42.565653       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:23:42.565689       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:23:42.566794       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:23:42.566797       1 config.go:199] "Starting service config controller"
	I1205 19:23:42.566831       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:23:42.566838       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:23:42.566883       1 config.go:328] "Starting node config controller"
	I1205 19:23:42.566894       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:23:42.667857       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:23:42.667883       1 shared_informer.go:320] Caches are synced for node config
	I1205 19:23:42.667899       1 shared_informer.go:320] Caches are synced for service config
	W1205 19:24:43.078459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2641": http2: client connection lost
	E1205 19:24:43.078555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2641\": http2: client connection lost" logger="UnhandledError"
	W1205 19:24:43.078460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-392363&resourceVersion=2593": http2: client connection lost
	E1205 19:24:43.078602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-392363&resourceVersion=2593\": http2: client connection lost" logger="UnhandledError"
	W1205 19:24:43.078460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2594": http2: client connection lost
	E1205 19:24:43.078628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2594\": http2: client connection lost" logger="UnhandledError"
	
	
	==> kube-scheduler [7ca8c0a1f930b3fc242fdad172057f63f9d7cd4b8ae1a59e73023f879d520510] <==
	W1205 19:23:15.426654       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:23:15.426696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:23:15.734710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:23:15.734755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 19:23:15.907725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:23:15.907768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:23:17.725027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:23:17.725069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:23:18.238201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:23:18.238246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1205 19:23:39.093627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 19:23:57.989699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:56986->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.989938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:56942->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.989961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:57090->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.990191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:57062->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.991915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:57048->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.992419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:56982->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.992525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:56976->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.992603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:56964->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.992658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:57000->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.995830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:56958->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.996042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:57054->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.996119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:57016->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:57.996182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:57036->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1205 19:23:58.079765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:57076->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Dec 05 19:24:25 ha-392363 kubelet[839]: E1205 19:24:25.240967     839 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-392363?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 05 19:24:26 ha-392363 kubelet[839]: E1205 19:24:26.358513     839 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426666358308614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:26 ha-392363 kubelet[839]: E1205 19:24:26.358555     839 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426666358308614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:35 ha-392363 kubelet[839]: E1205 19:24:35.241824     839 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-392363?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 05 19:24:36 ha-392363 kubelet[839]: E1205 19:24:36.359497     839 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426676359330230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:36 ha-392363 kubelet[839]: E1205 19:24:36.359539     839 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426676359330230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075422     839 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2486": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075423     839 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-392363&resourceVersion=2483": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075496     839 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2486": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075499     839 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2486\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075422     839 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-392363&resourceVersion=2640": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075514     839 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-392363&resourceVersion=2483\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075521     839 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-392363?timeout=10s\": http2: client connection lost"
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075527     839 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2486": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075548     839 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2486": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075581     839 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2486": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075589     839 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2486\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075525     839 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2486\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: I1205 19:24:43.075432     839 status_manager.go:851] "Failed to get status for pod" podUID="39c8ae5156a707023e6aebcfde98f304" pod="kube-system/kube-vip-ha-392363" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-392363\": http2: client connection lost"
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075606     839 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2486\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075536     839 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-392363&resourceVersion=2640\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: W1205 19:24:43.075532     839 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2427": http2: client connection lost
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075965     839 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2486\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.076022     839 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2427\": http2: client connection lost" logger="UnhandledError"
	Dec 05 19:24:43 ha-392363 kubelet[839]: E1205 19:24:43.075519     839 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-392363.180e5e48ac52a907\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-392363.180e5e48ac52a907  kube-system   2460 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-392363,UID:d24462fe29aeef13d44b5970690f67d6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.2\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-392363,},FirstTimestamp:2024-12-05 19:22:52 +0000 UTC,LastTimestamp:2024-12-05 19:23:56.549629252 +0000 UTC m=+70.286360907,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-392363,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-392363 -n ha-392363
helpers_test.go:261: (dbg) Run:  kubectl --context ha-392363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (125.16s)

                                                
                                    

Test pass (301/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.72
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 5.46
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.26
18 TestDownloadOnly/v1.31.2/DeleteAll 0.32
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.2
20 TestDownloadOnlyKic 1.1
21 TestBinaryMirror 0.77
22 TestOffline 52.04
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 143.52
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.46
35 TestAddons/parallel/Registry 15.17
37 TestAddons/parallel/InspektorGadget 10.64
40 TestAddons/parallel/CSI 47.15
41 TestAddons/parallel/Headlamp 16.77
42 TestAddons/parallel/CloudSpanner 6.45
43 TestAddons/parallel/LocalPath 51.92
44 TestAddons/parallel/NvidiaDevicePlugin 5.49
45 TestAddons/parallel/Yakd 10.73
46 TestAddons/parallel/AmdGpuDevicePlugin 5.48
47 TestAddons/StoppedEnableDisable 12.08
48 TestCertOptions 29.05
49 TestCertExpiration 216.13
51 TestForceSystemdFlag 25.99
52 TestForceSystemdEnv 28.86
54 TestKVMDriverInstallOrUpdate 1.81
58 TestErrorSpam/setup 20.13
59 TestErrorSpam/start 0.57
60 TestErrorSpam/status 0.85
61 TestErrorSpam/pause 1.46
62 TestErrorSpam/unpause 1.68
63 TestErrorSpam/stop 1.34
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 42.03
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.03
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.03
75 TestFunctional/serial/CacheCmd/cache/add_local 0.89
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 39
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.3
86 TestFunctional/serial/LogsFileCmd 1.31
87 TestFunctional/serial/InvalidService 4.51
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 8.62
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.1
97 TestFunctional/parallel/ServiceCmdConnect 9.01
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 38.4
101 TestFunctional/parallel/SSHCmd 0.58
102 TestFunctional/parallel/CpCmd 1.82
103 TestFunctional/parallel/MySQL 18.38
104 TestFunctional/parallel/FileSync 0.4
105 TestFunctional/parallel/CertSync 2.3
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
113 TestFunctional/parallel/License 0.21
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
116 TestFunctional/parallel/ProfileCmd/profile_list 0.53
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
118 TestFunctional/parallel/MountCmd/any-port 7.84
119 TestFunctional/parallel/MountCmd/specific-port 1.72
120 TestFunctional/parallel/ServiceCmd/List 0.57
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.94
124 TestFunctional/parallel/ServiceCmd/Format 0.35
125 TestFunctional/parallel/ServiceCmd/URL 0.37
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 1.04
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.41
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.42
132 TestFunctional/parallel/ImageCommands/ImageBuild 6.43
133 TestFunctional/parallel/ImageCommands/Setup 0.43
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.88
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.37
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.8
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 96.05
162 TestMultiControlPlane/serial/DeployApp 5.15
163 TestMultiControlPlane/serial/PingHostFromPods 1.07
164 TestMultiControlPlane/serial/AddWorkerNode 31.79
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
167 TestMultiControlPlane/serial/CopyFile 15.57
168 TestMultiControlPlane/serial/StopSecondaryNode 12.45
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
170 TestMultiControlPlane/serial/RestartSecondaryNode 22.95
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 195.25
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.07
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
175 TestMultiControlPlane/serial/StopCluster 35.44
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
178 TestMultiControlPlane/serial/AddSecondaryNode 40.2
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
183 TestJSONOutput/start/Command 39.39
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.66
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.57
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.74
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
208 TestKicCustomNetwork/create_custom_network 30.16
209 TestKicCustomNetwork/use_default_bridge_network 22.3
210 TestKicExistingNetwork 22.22
211 TestKicCustomSubnet 23.72
212 TestKicStaticIP 25.72
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 48.79
217 TestMountStart/serial/StartWithMountFirst 5.37
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 5.46
220 TestMountStart/serial/VerifyMountSecond 0.25
221 TestMountStart/serial/DeleteFirst 1.6
222 TestMountStart/serial/VerifyMountPostDelete 0.24
223 TestMountStart/serial/Stop 1.17
224 TestMountStart/serial/RestartStopped 7.03
225 TestMountStart/serial/VerifyMountPostStop 0.24
228 TestMultiNode/serial/FreshStart2Nodes 66.61
229 TestMultiNode/serial/DeployApp2Nodes 4.21
230 TestMultiNode/serial/PingHostFrom2Pods 0.72
231 TestMultiNode/serial/AddNode 26.13
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.59
234 TestMultiNode/serial/CopyFile 8.8
235 TestMultiNode/serial/StopNode 2.06
236 TestMultiNode/serial/StartAfterStop 8.68
237 TestMultiNode/serial/RestartKeepsNodes 93.84
238 TestMultiNode/serial/DeleteNode 5.21
239 TestMultiNode/serial/StopMultiNode 23.64
240 TestMultiNode/serial/RestartMultiNode 47.81
241 TestMultiNode/serial/ValidateNameConflict 25.39
246 TestPreload 102.9
248 TestScheduledStopUnix 98.73
251 TestInsufficientStorage 9.87
252 TestRunningBinaryUpgrade 113.99
254 TestKubernetesUpgrade 341.96
255 TestMissingContainerUpgrade 136.99
256 TestStoppedBinaryUpgrade/Setup 0.58
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 28.84
260 TestStoppedBinaryUpgrade/Upgrade 103.58
261 TestNoKubernetes/serial/StartWithStopK8s 10.03
262 TestNoKubernetes/serial/Start 4.55
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
264 TestNoKubernetes/serial/ProfileList 1.33
265 TestNoKubernetes/serial/Stop 6.47
266 TestNoKubernetes/serial/StartNoArgs 7.67
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
276 TestPause/serial/Start 44.31
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
278 TestPause/serial/SecondStartNoReconfiguration 33.07
279 TestPause/serial/Pause 0.71
280 TestPause/serial/VerifyStatus 0.31
281 TestPause/serial/Unpause 0.74
282 TestPause/serial/PauseAgain 0.81
283 TestPause/serial/DeletePaused 2.56
284 TestPause/serial/VerifyDeletedResources 14.98
292 TestNetworkPlugins/group/false 4.09
297 TestStartStop/group/old-k8s-version/serial/FirstStart 159.91
299 TestStartStop/group/no-preload/serial/FirstStart 54.61
300 TestStartStop/group/no-preload/serial/DeployApp 8.26
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
302 TestStartStop/group/no-preload/serial/Stop 11.86
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
304 TestStartStop/group/no-preload/serial/SecondStart 262.42
305 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
307 TestStartStop/group/old-k8s-version/serial/Stop 12.05
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
309 TestStartStop/group/old-k8s-version/serial/SecondStart 138.74
311 TestStartStop/group/embed-certs/serial/FirstStart 44.97
312 TestStartStop/group/embed-certs/serial/DeployApp 9.23
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.81
314 TestStartStop/group/embed-certs/serial/Stop 11.84
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/embed-certs/serial/SecondStart 263.45
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.63
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.83
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 273.88
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
327 TestStartStop/group/old-k8s-version/serial/Pause 2.69
329 TestStartStop/group/newest-cni/serial/FirstStart 28.3
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/no-preload/serial/Pause 2.79
334 TestNetworkPlugins/group/auto/Start 41.33
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
337 TestStartStop/group/newest-cni/serial/Stop 1.21
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 13.42
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
343 TestStartStop/group/newest-cni/serial/Pause 2.69
344 TestNetworkPlugins/group/kindnet/Start 39.66
345 TestNetworkPlugins/group/auto/KubeletFlags 0.29
346 TestNetworkPlugins/group/auto/NetCatPod 10.2
347 TestNetworkPlugins/group/auto/DNS 0.13
348 TestNetworkPlugins/group/auto/Localhost 0.12
349 TestNetworkPlugins/group/auto/HairPin 0.19
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/Start 56.75
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
353 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
354 TestNetworkPlugins/group/kindnet/DNS 0.12
355 TestNetworkPlugins/group/kindnet/Localhost 0.12
356 TestNetworkPlugins/group/kindnet/HairPin 0.1
357 TestNetworkPlugins/group/custom-flannel/Start 47.74
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/KubeletFlags 0.26
360 TestNetworkPlugins/group/calico/NetCatPod 10.16
361 TestNetworkPlugins/group/calico/DNS 0.12
362 TestNetworkPlugins/group/calico/Localhost 0.1
363 TestNetworkPlugins/group/calico/HairPin 0.1
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
367 TestNetworkPlugins/group/custom-flannel/DNS 0.13
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
370 TestNetworkPlugins/group/flannel/Start 44.21
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
373 TestStartStop/group/embed-certs/serial/Pause 2.94
374 TestNetworkPlugins/group/bridge/Start 63.74
375 TestNetworkPlugins/group/enable-default-cni/Start 64.11
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
378 TestNetworkPlugins/group/flannel/NetCatPod 10.17
379 TestNetworkPlugins/group/flannel/DNS 0.16
380 TestNetworkPlugins/group/flannel/Localhost 0.1
381 TestNetworkPlugins/group/flannel/HairPin 0.1
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
385 TestNetworkPlugins/group/bridge/NetCatPod 9.2
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.6
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
390 TestNetworkPlugins/group/bridge/DNS 0.14
391 TestNetworkPlugins/group/bridge/Localhost 0.11
392 TestNetworkPlugins/group/bridge/HairPin 0.1
393 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
394 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
395 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (5.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-874158 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-874158 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.721295781s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.72s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1205 19:03:10.461594 1006315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1205 19:03:10.461692 1006315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-874158
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-874158: exit status 85 (63.076062ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-874158 | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |          |
	|         | -p download-only-874158        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:03:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:03:04.784263 1006327 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:03:04.784530 1006327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:04.784542 1006327 out.go:358] Setting ErrFile to fd 2...
	I1205 19:03:04.784546 1006327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:04.784731 1006327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	W1205 19:03:04.784850 1006327 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20052-999445/.minikube/config/config.json: open /home/jenkins/minikube-integration/20052-999445/.minikube/config/config.json: no such file or directory
	I1205 19:03:04.785423 1006327 out.go:352] Setting JSON to true
	I1205 19:03:04.786413 1006327 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":78336,"bootTime":1733347049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:03:04.786561 1006327 start.go:139] virtualization: kvm guest
	I1205 19:03:04.789113 1006327 out.go:97] [download-only-874158] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1205 19:03:04.789218 1006327 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 19:03:04.789287 1006327 notify.go:220] Checking for updates...
	I1205 19:03:04.790631 1006327 out.go:169] MINIKUBE_LOCATION=20052
	I1205 19:03:04.792253 1006327 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:03:04.793739 1006327 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:03:04.794921 1006327 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:03:04.796029 1006327 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:03:04.798092 1006327 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:03:04.798287 1006327 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:03:04.820187 1006327 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:03:04.820276 1006327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:05.154372 1006327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:05.145370524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:05.154493 1006327 docker.go:318] overlay module found
	I1205 19:03:05.156194 1006327 out.go:97] Using the docker driver based on user configuration
	I1205 19:03:05.156215 1006327 start.go:297] selected driver: docker
	I1205 19:03:05.156223 1006327 start.go:901] validating driver "docker" against <nil>
	I1205 19:03:05.156320 1006327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:05.201321 1006327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:05.192820103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:05.201517 1006327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:03:05.202077 1006327 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1205 19:03:05.202270 1006327 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 19:03:05.204069 1006327 out.go:169] Using Docker driver with root privileges
	I1205 19:03:05.205195 1006327 cni.go:84] Creating CNI manager for ""
	I1205 19:03:05.205268 1006327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:05.205284 1006327 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:03:05.205366 1006327 start.go:340] cluster config:
	{Name:download-only-874158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-874158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:03:05.206687 1006327 out.go:97] Starting "download-only-874158" primary control-plane node in "download-only-874158" cluster
	I1205 19:03:05.206706 1006327 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:03:05.207820 1006327 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1205 19:03:05.207857 1006327 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 19:03:05.207894 1006327 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 19:03:05.223052 1006327 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 19:03:05.223889 1006327 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1205 19:03:05.223971 1006327 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 19:03:05.233099 1006327 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:03:05.233123 1006327 cache.go:56] Caching tarball of preloaded images
	I1205 19:03:05.233243 1006327 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 19:03:05.234754 1006327 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 19:03:05.234774 1006327 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:03:05.260622 1006327 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:03:08.995720 1006327 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:03:08.995821 1006327 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-874158 host does not exist
	  To start a cluster, run: "minikube start -p download-only-874158"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
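The download-only run above fetches the preload tarball with an explicit checksum=md5:... query parameter and then verifies the file on disk (the "saving checksum" / "verifying checksum" preload.go lines). A minimal Go sketch of that kind of verification, using the cache path and the v1.20.0 digest shown in the log; this is an illustration of the check, not minikube's own implementation:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Path and expected digest are taken from the log above; adjust for your environment.
	path := "/home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	want := "f93b07cde9c3289306cbaeb7a1803c19"

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Stream the tarball through an MD5 hash and compare against the published digest.
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		log.Fatalf("preload checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("preload tarball checksum OK")
}

Once the tarball has been verified and its checksum saved next to it, the later preload-exists subtests can report "Found local preload" without downloading anything again, which is why they complete in 0.00s.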

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-874158
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (5.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-589784 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-589784 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.462920065s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.46s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1205 19:03:16.315853 1006315 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1205 19:03:16.315897 1006315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-589784
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-589784: exit status 85 (261.359621ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-874158 | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | -p download-only-874158        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:03 UTC |
	| delete  | -p download-only-874158        | download-only-874158 | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC | 05 Dec 24 19:03 UTC |
	| start   | -o=json --download-only        | download-only-589784 | jenkins | v1.34.0 | 05 Dec 24 19:03 UTC |                     |
	|         | -p download-only-589784        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:03:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:03:10.897172 1006690 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:03:10.897304 1006690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:10.897313 1006690 out.go:358] Setting ErrFile to fd 2...
	I1205 19:03:10.897318 1006690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:03:10.897533 1006690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:03:10.898145 1006690 out.go:352] Setting JSON to true
	I1205 19:03:10.899044 1006690 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":78342,"bootTime":1733347049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:03:10.899163 1006690 start.go:139] virtualization: kvm guest
	I1205 19:03:10.900976 1006690 out.go:97] [download-only-589784] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:03:10.901153 1006690 notify.go:220] Checking for updates...
	I1205 19:03:10.902380 1006690 out.go:169] MINIKUBE_LOCATION=20052
	I1205 19:03:10.903720 1006690 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:03:10.904901 1006690 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:03:10.905930 1006690 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:03:10.907015 1006690 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:03:10.908879 1006690 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:03:10.909083 1006690 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:03:10.931231 1006690 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:03:10.931363 1006690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:10.979960 1006690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:10.970646548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:10.980070 1006690 docker.go:318] overlay module found
	I1205 19:03:10.981682 1006690 out.go:97] Using the docker driver based on user configuration
	I1205 19:03:10.981712 1006690 start.go:297] selected driver: docker
	I1205 19:03:10.981724 1006690 start.go:901] validating driver "docker" against <nil>
	I1205 19:03:10.981817 1006690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:03:11.029425 1006690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-05 19:03:11.021377776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:03:11.029607 1006690 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:03:11.030168 1006690 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1205 19:03:11.030338 1006690 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 19:03:11.032001 1006690 out.go:169] Using Docker driver with root privileges
	I1205 19:03:11.033135 1006690 cni.go:84] Creating CNI manager for ""
	I1205 19:03:11.033194 1006690 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:03:11.033206 1006690 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:03:11.033263 1006690 start.go:340] cluster config:
	{Name:download-only-589784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-589784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:03:11.034444 1006690 out.go:97] Starting "download-only-589784" primary control-plane node in "download-only-589784" cluster
	I1205 19:03:11.034462 1006690 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:03:11.035445 1006690 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1205 19:03:11.035463 1006690 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:11.035573 1006690 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 19:03:11.050987 1006690 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 19:03:11.051123 1006690 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1205 19:03:11.051143 1006690 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1205 19:03:11.051148 1006690 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1205 19:03:11.051158 1006690 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1205 19:03:11.073103 1006690 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:03:11.073158 1006690 cache.go:56] Caching tarball of preloaded images
	I1205 19:03:11.073296 1006690 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:11.074870 1006690 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1205 19:03:11.074889 1006690 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:03:11.098918 1006690 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:03:14.778072 1006690 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:03:14.778190 1006690 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20052-999445/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:03:15.558624 1006690 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:03:15.559020 1006690 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/download-only-589784/config.json ...
	I1205 19:03:15.559056 1006690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/download-only-589784/config.json: {Name:mk24872a0b593179ed1b9ae1b08ff27d3efd2ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:15.559276 1006690 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:03:15.560084 1006690 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20052-999445/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-589784 host does not exist
	  To start a cluster, run: "minikube start -p download-only-589784"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.26s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.32s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-589784
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.20s)

                                                
                                    
TestDownloadOnlyKic (1.1s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-901904 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-901904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-901904
--- PASS: TestDownloadOnlyKic (1.10s)

                                                
                                    
TestBinaryMirror (0.77s)

                                                
                                                
=== RUN   TestBinaryMirror
I1205 19:03:18.492352 1006315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-857636 --alsologtostderr --binary-mirror http://127.0.0.1:35259 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-857636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-857636
--- PASS: TestBinaryMirror (0.77s)
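TestBinaryMirror exercises the checksum=file: form seen in the binary.go line above: the kubectl binary is downloaded and compared against the digest published in the sidecar kubectl.sha256 file. A rough Go sketch of that pattern with the URLs from the log; it assumes the sidecar file contains a bare hex SHA-256 digest and is only an illustration, not minikube's download code:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// fetch downloads a URL fully into memory and fails on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	binURL := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
	sumURL := binURL + ".sha256"

	bin, err := fetch(binURL)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(sumURL)
	if err != nil {
		log.Fatal(err)
	}

	// Compare the computed digest with the one published next to the binary.
	got := sha256.Sum256(bin)
	want := strings.TrimSpace(string(sum))
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("kubectl checksum mismatch: got %x, want %s", got, want)
	}
	fmt.Println("kubectl digest matches published .sha256")
}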

                                                
                                    
TestOffline (52.04s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-634225 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-634225 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (48.999316518s)
helpers_test.go:175: Cleaning up "offline-crio-634225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-634225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-634225: (3.039465029s)
--- PASS: TestOffline (52.04s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-792804
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-792804: exit status 85 (56.667953ms)

                                                
                                                
-- stdout --
	* Profile "addons-792804" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-792804"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-792804
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-792804: exit status 85 (56.143885ms)

                                                
                                                
-- stdout --
	* Profile "addons-792804" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-792804"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
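Both PreSetup checks assert that enabling or disabling an addon against a profile that does not exist fails with exit status 85 rather than doing anything. A minimal Go sketch of the same assertion outside the test harness, reusing the command from the log (binary path and profile name are the ones shown above):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-792804")
	out, err := cmd.CombinedOutput()

	// A non-existing profile should surface as a non-zero exit, specifically status 85.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		if exitErr.ExitCode() != 85 {
			log.Fatalf("expected exit status 85 for a missing profile, got %d", exitErr.ExitCode())
		}
		return
	}
	if err != nil {
		log.Fatal(err) // the binary could not be started at all
	}
	log.Fatal("expected the command to fail for a non-existing profile, but it succeeded")
}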

                                                
                                    
TestAddons/Setup (143.52s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-792804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-792804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m23.521712836s)
--- PASS: TestAddons/Setup (143.52s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-792804 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-792804 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-792804 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-792804 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7e2f726d-bf6e-485f-a105-edbc97d9c50d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7e2f726d-bf6e-485f-a105-edbc97d9c50d] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003717103s
addons_test.go:633: (dbg) Run:  kubectl --context addons-792804 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-792804 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-792804 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)
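The FakeCredentials flow above boils down to: create the busybox pod, wait for it to be Running, then exec printenv inside it to confirm that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS (and GOOGLE_CLOUD_PROJECT). A small Go sketch of that final probe, using the same kubectl invocation, context and pod name as the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: kubectl --context addons-792804 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
	out, err := exec.Command(
		"kubectl", "--context", "addons-792804",
		"exec", "busybox", "--",
		"printenv", "GOOGLE_APPLICATION_CREDENTIALS",
	).Output()
	if err != nil {
		log.Fatalf("printenv failed (is the busybox pod Running?): %v", err)
	}
	path := strings.TrimSpace(string(out))
	if path == "" {
		log.Fatal("GOOGLE_APPLICATION_CREDENTIALS was not injected into the pod")
	}
	fmt.Println("credentials mounted at:", path)
}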

                                                
                                    
TestAddons/parallel/Registry (15.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.953992ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qh8j2" [4ed56af8-db58-447f-b533-cc510548cf01] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002899421s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5jm2x" [3066695f-7cd9-404c-b980-d75b005c5b47] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003554928s
addons_test.go:331: (dbg) Run:  kubectl --context addons-792804 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-792804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-792804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.385497824s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 ip
2024/12/05 19:06:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.17s)
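For reference, the reachability check logged above can be reproduced by hand against the same profile. This is only an illustrative sketch of the commands the test runs (`minikube` below stands in for the CI's out/minikube-linux-amd64 binary; the addons-792804 profile and the 192.168.49.2 node IP are specific to this run):

	# In-cluster check via the registry Service DNS name (the command the test runs)
	kubectl --context addons-792804 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# The registry is also reachable on the node IP at port 5000, as seen in the
	# DEBUG GET above; curl here is an assumed stand-in for that request:
	minikube -p addons-792804 ip
	curl -sI http://192.168.49.2:5000/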

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.64s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j7qx8" [4685c130-d70f-49d3-8e60-d1177bb5d1de] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004178032s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 addons disable inspektor-gadget --alsologtostderr -v=1: (5.638313972s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1205 19:06:17.057326 1006315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 19:06:17.072829 1006315 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 19:06:17.072855 1006315 kapi.go:107] duration metric: took 15.541839ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 15.553185ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1e19bb93-2eb1-46b9-b43b-10cc0d5324ab] Pending
helpers_test.go:344: "task-pv-pod" [1e19bb93-2eb1-46b9-b43b-10cc0d5324ab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1e19bb93-2eb1-46b9-b43b-10cc0d5324ab] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003435101s
addons_test.go:511: (dbg) Run:  kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-792804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-792804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-792804 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-792804 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2d081a67-f0f9-4bf6-8141-e273c6dcc333] Pending
helpers_test.go:344: "task-pv-pod-restore" [2d081a67-f0f9-4bf6-8141-e273c6dcc333] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2d081a67-f0f9-4bf6-8141-e273c6dcc333] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003585413s
addons_test.go:553: (dbg) Run:  kubectl --context addons-792804 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-792804 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-792804 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.495789942s)
--- PASS: TestAddons/parallel/CSI (47.15s)
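Condensed, the snapshot/restore flow exercised above is the kubectl sequence below. The testdata manifests live in the minikube repository and their contents are not shown in this log, so treat this as a sketch of the order of operations rather than a standalone recipe:

	kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pvc.yaml        # PVC "hpvc"
	kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod "task-pv-pod" mounting it
	kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/snapshot.yaml   # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-792804 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl --context addons-792804 delete pod task-pv-pod
	kubectl --context addons-792804 delete pvc hpvc
	kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore" cloned from the snapshot
	kubectl --context addons-792804 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore"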

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-792804 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-b6b45" [a407824d-b43e-4b2a-a82f-f8b9092cd107] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-b6b45" [a407824d-b43e-4b2a-a82f-f8b9092cd107] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-b6b45" [a407824d-b43e-4b2a-a82f-f8b9092cd107] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004801329s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 addons disable headlamp --alsologtostderr -v=1: (6.048913252s)
--- PASS: TestAddons/parallel/Headlamp (16.77s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-ws4s9" [e333b8dd-e7c4-4204-b6ed-10687ba7e18d] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002999704s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.45s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.92s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-792804 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-792804 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792804 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7bcf1e0c-d13e-4bcd-b12d-54764157c49a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7bcf1e0c-d13e-4bcd-b12d-54764157c49a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7bcf1e0c-d13e-4bcd-b12d-54764157c49a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003464595s
addons_test.go:906: (dbg) Run:  kubectl --context addons-792804 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 ssh "cat /opt/local-path-provisioner/pvc-fdbafd17-1365-40d4-95e6-83d3408a157a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-792804 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-792804 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.953014809s)
--- PASS: TestAddons/parallel/LocalPath (51.92s)
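The same flow, driven by hand against the local-path provisioner; names and paths are taken from the log above, and the pvc-… directory name is generated per run, so it is only an example value:

	kubectl --context addons-792804 apply -f testdata/storage-provisioner-rancher/pvc.yaml   # PVC "test-pvc"
	kubectl --context addons-792804 apply -f testdata/storage-provisioner-rancher/pod.yaml   # pod "test-local-path"
	kubectl --context addons-792804 get pvc test-pvc -o jsonpath='{.status.phase}'
	# Read the file the pod wrote onto the node-local volume:
	minikube -p addons-792804 ssh "cat /opt/local-path-provisioner/pvc-fdbafd17-1365-40d4-95e6-83d3408a157a_default_test-pvc/file1"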

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-plx8r" [7f230e91-1177-4780-b554-91b9244f8abe] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004807141s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-wrbwl" [42c7e93e-0b31-4707-95e9-1f0ddcc9f67d] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003883473s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-792804 addons disable yakd --alsologtostderr -v=1: (5.729593828s)
--- PASS: TestAddons/parallel/Yakd (10.73s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-rkfpl" [84620fa8-2414-4aee-997e-77166e219e34] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004349868s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.48s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.08s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-792804
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-792804: (11.834089449s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-792804
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-792804
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-792804
--- PASS: TestAddons/StoppedEnableDisable (12.08s)

                                                
                                    
x
+
TestCertOptions (29.05s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-186553 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-186553 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.634080298s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-186553 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-186553 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-186553 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-186553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-186553
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-186553: (3.818329243s)
--- PASS: TestCertOptions (29.05s)
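To spot-check the result by hand, the certificate inspected above can be filtered for the requested SANs and the non-default apiserver port; the grep filters are additions for readability, not something the test itself runs:

	minikube -p cert-options-186553 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'
	# Expect localhost, www.google.com, 127.0.0.1 and 192.168.15.15 among the SANs,
	# and the kubeconfig server URL to use port 8555:
	kubectl --context cert-options-186553 config view | grep 8555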

                                                
                                    
x
+
TestCertExpiration (216.13s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-138273 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-138273 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.10192738s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-138273 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1205 19:44:42.988518 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-138273 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (12.750760868s)
helpers_test.go:175: Cleaning up "cert-expiration-138273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-138273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-138273: (2.277899811s)
--- PASS: TestCertExpiration (216.13s)

                                                
                                    
x
+
TestForceSystemdFlag (25.99s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-144413 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-144413 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.031155655s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-144413 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-144413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-144413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-144413: (2.669998575s)
--- PASS: TestForceSystemdFlag (25.99s)

                                                
                                    
x
+
TestForceSystemdEnv (28.86s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-532867 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-532867 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.089149472s)
helpers_test.go:175: Cleaning up "force-systemd-env-532867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-532867
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-532867: (4.76567772s)
--- PASS: TestForceSystemdEnv (28.86s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.81s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1205 19:41:19.883865 1006315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 19:41:19.884016 1006315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1205 19:41:19.918734 1006315 install.go:62] docker-machine-driver-kvm2: exit status 1
W1205 19:41:19.919169 1006315 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 19:41:19.919246 1006315 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3744952751/001/docker-machine-driver-kvm2
I1205 19:41:20.056891 1006315 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3744952751/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000015ce0 gz:0xc000015ce8 tar:0xc000015a90 tar.bz2:0xc000015ae0 tar.gz:0xc000015b20 tar.xz:0xc000015b30 tar.zst:0xc000015c90 tbz2:0xc000015ae0 tgz:0xc000015b20 txz:0xc000015b30 tzst:0xc000015c90 xz:0xc000015cf0 zip:0xc000015d00 zst:0xc000015cf8] Getters:map[file:0xc001db68c0 http:0xc00077ca50 https:0xc00077caa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 19:41:20.056938 1006315 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3744952751/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.81s)

                                                
                                    
x
+
TestErrorSpam/setup (20.13s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-782315 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-782315 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-782315 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-782315 --driver=docker  --container-runtime=crio: (20.128975648s)
--- PASS: TestErrorSpam/setup (20.13s)

                                                
                                    
x
+
TestErrorSpam/start (0.57s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

                                                
                                    
x
+
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
x
+
TestErrorSpam/pause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
x
+
TestErrorSpam/stop (1.34s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 stop: (1.167846035s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782315 --log_dir /tmp/nospam-782315 stop
--- PASS: TestErrorSpam/stop (1.34s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20052-999445/.minikube/files/etc/test/nested/copy/1006315/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (42.03s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-418616 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-418616 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.026359172s)
--- PASS: TestFunctional/serial/StartWithProxy (42.03s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1205 19:13:23.369705 1006315 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-418616 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-418616 --alsologtostderr -v=8: (27.026625488s)
functional_test.go:663: soft start took 27.02748584s for "functional-418616" cluster.
I1205 19:13:50.396756 1006315 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (27.03s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-418616 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-418616 cache add registry.k8s.io/pause:3.3: (1.081409016s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-418616 /tmp/TestFunctionalserialCacheCmdcacheadd_local1708463274/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cache add minikube-local-cache-test:functional-418616
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cache delete minikube-local-cache-test:functional-418616
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-418616
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.957505ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
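In plain terms, the reload cycle above is: remove the image inside the node, confirm it is gone, then restore it from minikube's on-host cache. A minimal sketch using the same image and profile:

	minikube -p functional-418616 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-418616 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image not present
	minikube -p functional-418616 cache reload                                            # loads cached images back into the node
	minikube -p functional-418616 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again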

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 kubectl -- --context functional-418616 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-418616 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-418616 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-418616 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.998582462s)
functional_test.go:761: restart took 38.998704572s for "functional-418616" cluster.
I1205 19:14:35.797768 1006315 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (39.00s)
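The restart above passes an extra apiserver flag through minikube's component.key=value syntax for --extra-config; outside the test harness the same invocation looks like this (profile name taken from this run):

	minikube start -p functional-418616 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all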

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-418616 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-418616 logs: (1.30155486s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 logs --file /tmp/TestFunctionalserialLogsFileCmd468362740/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-418616 logs --file /tmp/TestFunctionalserialLogsFileCmd468362740/001/logs.txt: (1.311927937s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.51s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-418616 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-418616
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-418616: exit status 115 (501.331981ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30816 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-418616 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.51s)
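The failure mode being asserted: `minikube service` refuses to open a Service with no running backing pod and exits with status 115 (SVC_UNREACHABLE). A sketch of the same sequence:

	kubectl --context functional-418616 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-418616   # exit 115: no running pod for service invalid-svc
	kubectl --context functional-418616 delete -f testdata/invalidsvc.yaml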

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 config get cpus: exit status 14 (71.178336ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 config get cpus: exit status 14 (59.736066ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
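The non-zero exits above come from minikube's config subcommand: `config get` on an unset key exits 14 with "specified key could not be found in config", while set/unset succeed silently. A minimal sketch against the same profile:

	minikube -p functional-418616 config set cpus 2
	minikube -p functional-418616 config get cpus     # prints 2
	minikube -p functional-418616 config unset cpus
	minikube -p functional-418616 config get cpus     # exit status 14: key not found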

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-418616 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-418616 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1043592: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.62s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-418616 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-418616 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (170.490261ms)

                                                
                                                
-- stdout --
	* [functional-418616] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:14:46.135371 1043024 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:14:46.135509 1043024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:14:46.135523 1043024 out.go:358] Setting ErrFile to fd 2...
	I1205 19:14:46.135531 1043024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:14:46.135808 1043024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:14:46.136477 1043024 out.go:352] Setting JSON to false
	I1205 19:14:46.137732 1043024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":79037,"bootTime":1733347049,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:14:46.137873 1043024 start.go:139] virtualization: kvm guest
	I1205 19:14:46.142702 1043024 out.go:177] * [functional-418616] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:14:46.144065 1043024 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:14:46.144128 1043024 notify.go:220] Checking for updates...
	I1205 19:14:46.146376 1043024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:14:46.147559 1043024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:14:46.148863 1043024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:14:46.150137 1043024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:14:46.151488 1043024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:14:46.152957 1043024 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:14:46.153560 1043024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:14:46.181904 1043024 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:14:46.181986 1043024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:14:46.239123 1043024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-05 19:14:46.229674153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:14:46.239281 1043024 docker.go:318] overlay module found
	I1205 19:14:46.241198 1043024 out.go:177] * Using the docker driver based on existing profile
	I1205 19:14:46.242466 1043024 start.go:297] selected driver: docker
	I1205 19:14:46.242483 1043024 start.go:901] validating driver "docker" against &{Name:functional-418616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-418616 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:14:46.242608 1043024 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:14:46.244655 1043024 out.go:201] 
	W1205 19:14:46.245635 1043024 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 19:14:46.246747 1043024 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-418616 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
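The dry-run invocation above exercises minikube's driver and resource validation without creating a node: the 250MB request is rejected against the 1800MB usable minimum and the command exits non-zero. A minimal sketch of that kind of pre-flight check is below; the constant and function names are illustrative assumptions, not minikube's actual code.

package main

import (
	"fmt"
	"os"
)

// minMemoryMiB mirrors the 1800MB floor reported in the log; the name is an assumption.
const minMemoryMiB = 1800

// validateMemory rejects requests below the usable minimum, as the dry run above does.
func validateMemory(requestedMiB int) error {
	if requestedMiB < minMemoryMiB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMiB", requestedMiB, minMemoryMiB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // exit status 23 matches the failing start seen in this report
	}
}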

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-418616 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-418616 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.522688ms)

                                                
                                                
-- stdout --
	* [functional-418616] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:14:45.973426 1042883 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:14:45.973616 1042883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:14:45.973630 1042883 out.go:358] Setting ErrFile to fd 2...
	I1205 19:14:45.973638 1042883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:14:45.973983 1042883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:14:45.974752 1042883 out.go:352] Setting JSON to false
	I1205 19:14:45.976209 1042883 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":79037,"bootTime":1733347049,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:14:45.976469 1042883 start.go:139] virtualization: kvm guest
	I1205 19:14:45.978707 1042883 out.go:177] * [functional-418616] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1205 19:14:45.980081 1042883 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:14:45.980129 1042883 notify.go:220] Checking for updates...
	I1205 19:14:45.983091 1042883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:14:45.984372 1042883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:14:45.985591 1042883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:14:45.986734 1042883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:14:45.987802 1042883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:14:45.989419 1042883 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:14:45.989947 1042883 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:14:46.014962 1042883 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:14:46.015061 1042883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:14:46.067699 1042883 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-05 19:14:46.058114053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:14:46.067801 1042883 docker.go:318] overlay module found
	I1205 19:14:46.070203 1042883 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1205 19:14:46.071226 1042883 start.go:297] selected driver: docker
	I1205 19:14:46.071243 1042883 start.go:901] validating driver "docker" against &{Name:functional-418616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-418616 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:14:46.071368 1042883 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:14:46.073384 1042883 out.go:201] 
	W1205 19:14:46.074590 1042883 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 19:14:46.075682 1042883 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
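The French stderr above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo", i.e. "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB") is the expected localized form of the same memory error, which is what this test verifies. A sketch of forcing localized output from Go follows; selecting the locale through LC_ALL=fr is an assumption about the mechanism, since the log does not show how the environment was set.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Run the minikube binary with a French locale and an undersized memory
	// request, then look for the localized error text. The binary path and
	// flags match the log; LC_ALL=fr is an assumption.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-418616",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized error message found")
	}
}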

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
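The second status invocation renders its output through a Go template; the "kublet" label is reproduced verbatim from the command in the log (the label text is arbitrary, the field referenced is {{.Kubelet}}). A small sketch of consuming that formatted output from Go is below; the expected values ("Running", "Configured") are typical minikube status output and are an assumption here, since the rendered string is not printed in this report.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-418616",
		"status", "-f", format).Output()
	if err != nil {
		fmt.Println("status returned non-zero:", err)
	}
	for _, field := range strings.Split(strings.TrimSpace(string(out)), ",") {
		fmt.Println(field) // e.g. host:Running
	}
}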

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-418616 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-418616 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kvtnj" [561d844e-31e9-43da-b3e3-e90d165adf05] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kvtnj" [561d844e-31e9-43da-b3e3-e90d165adf05] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.040063413s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30998
functional_test.go:1675: http://192.168.49.2:30998: success! body:
Hostname: hello-node-connect-67bdd5bbb4-kvtnj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30998
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.01s)
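This test creates an echoserver deployment, exposes it as a NodePort service, resolves the URL with "minikube service hello-node-connect --url", and asserts that an HTTP GET succeeds; the body above is the echoserver reflecting the request back. A minimal sketch of the client side follows, using the endpoint printed in the log (a fresh run would resolve its own URL instead).

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from this report's log line; it changes per run.
	url := "http://192.168.49.2:30998"
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d\n%s\n", resp.StatusCode, body) // echoserver echoes the request details
}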

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (38.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4dfb6136-f8b1-413c-a9df-99c0c2073ab8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003532531s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-418616 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-418616 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-418616 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-418616 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb9bcd3e-b6e0-4aeb-b9dd-42bd235a3687] Pending
helpers_test.go:344: "sp-pod" [bb9bcd3e-b6e0-4aeb-b9dd-42bd235a3687] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb9bcd3e-b6e0-4aeb-b9dd-42bd235a3687] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.075290132s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-418616 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-418616 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-418616 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [04be6ed8-61ae-4a50-a771-fa744c7354dc] Pending
helpers_test.go:344: "sp-pod" [04be6ed8-61ae-4a50-a771-fa744c7354dc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [04be6ed8-61ae-4a50-a771-fa744c7354dc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003393119s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-418616 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.40s)
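The sequence above is a persistence round trip: bind a PVC, run a pod that writes /tmp/mount/foo, delete the pod, schedule a new pod against the same claim, and confirm the file is still there. A sketch of the same steps driven through kubectl from Go is below; the manifest paths and pod name are the ones in the log, and the readiness waits between steps are omitted to keep the sketch short.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the same context the test uses and echoes the output.
func run(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-418616"}, args...)...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the first pod
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // the file should survive the recreate
}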

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh -n functional-418616 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cp functional-418616:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1603429095/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh -n functional-418616 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh -n functional-418616 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (18.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-418616 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-79fk4" [f3ebabf0-b9d7-42ca-9ce9-c52ae3807f1c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-79fk4" [f3ebabf0-b9d7-42ca-9ce9-c52ae3807f1c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004146523s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-418616 exec mysql-6cdb49bbb-79fk4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-418616 exec mysql-6cdb49bbb-79fk4 -- mysql -ppassword -e "show databases;": exit status 1 (103.7988ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1205 19:15:21.483564 1006315 retry.go:31] will retry after 581.041757ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-418616 exec mysql-6cdb49bbb-79fk4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-418616 exec mysql-6cdb49bbb-79fk4 -- mysql -ppassword -e "show databases;": exit status 1 (120.620019ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1205 19:15:22.186162 1006315 retry.go:31] will retry after 1.320421466s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-418616 exec mysql-6cdb49bbb-79fk4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.38s)
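The two ERROR 2002 failures above are the normal window between the mysql container reporting Running and mysqld actually accepting connections on its socket; the harness retries with a growing delay until the query succeeds. A small retry-with-backoff sketch in the same spirit follows; the attempt count and delays are illustrative, not the harness's values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll "show databases;" inside the pod until mysqld accepts connections,
	// doubling the delay between attempts.
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-418616",
			"exec", "mysql-6cdb49bbb-79fk4", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("mysql never became ready")
}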

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1006315/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo cat /etc/test/nested/copy/1006315/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1006315.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo cat /etc/ssl/certs/1006315.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1006315.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo cat /usr/share/ca-certificates/1006315.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/10063152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo cat /etc/ssl/certs/10063152.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/10063152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo cat /usr/share/ca-certificates/10063152.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.30s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-418616 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh "sudo systemctl is-active docker": exit status 1 (327.976105ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh "sudo systemctl is-active containerd": exit status 1 (387.390192ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
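The non-zero exits here are expected: with crio as the runtime, "systemctl is-active docker" and "systemctl is-active containerd" print "inactive" and exit with a non-zero status (3 in this run), which the ssh wrapper surfaces as "Process exited with status 3". The test passes precisely because neither unit reported "active". A sketch of reading that exit status from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports whether a systemd unit is active; systemctl exits 0 for
// "active" and non-zero otherwise, which exec surfaces as an *exec.ExitError.
func isActive(unit string) (bool, string) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			return false, fmt.Sprintf("%s (exit %d)", state, exitErr.ExitCode())
		}
		return false, err.Error()
	}
	return true, state
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		active, detail := isActive(unit)
		fmt.Printf("%s: active=%v %s\n", unit, active, detail)
	}
}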

                                                
                                    
x
+
TestFunctional/parallel/License (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-418616 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-418616 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-59v27" [881f0ff7-6dd1-4768-9d35-018e0e3830e4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-59v27" [881f0ff7-6dd1-4768-9d35-018e0e3830e4] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.018451908s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "468.056819ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "57.688286ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "441.225381ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.20235ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdany-port1398248070/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733426084768517598" to /tmp/TestFunctionalparallelMountCmdany-port1398248070/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733426084768517598" to /tmp/TestFunctionalparallelMountCmdany-port1398248070/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733426084768517598" to /tmp/TestFunctionalparallelMountCmdany-port1398248070/001/test-1733426084768517598
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (421.079882ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 19:14:45.189898 1006315 retry.go:31] will retry after 388.405492ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 19:14 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 19:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 19:14 test-1733426084768517598
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh cat /mount-9p/test-1733426084768517598
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-418616 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b268c413-a91d-4ddf-bd34-6f8a718f11fd] Pending
helpers_test.go:344: "busybox-mount" [b268c413-a91d-4ddf-bd34-6f8a718f11fd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b268c413-a91d-4ddf-bd34-6f8a718f11fd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b268c413-a91d-4ddf-bd34-6f8a718f11fd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004155646s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-418616 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdany-port1398248070/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.84s)
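The mount test starts "minikube mount" as a background daemon and then polls "findmnt -T /mount-9p | grep 9p" over ssh until the 9p mount appears; the first probe at 19:14:45 races the daemon and fails, which is why the harness retries. A polling sketch that shells out to the same ssh subcommand is below; the deadline and interval are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p mount shows up inside the node.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-418616",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted via 9p")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}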

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdspecific-port3901608158/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.074914ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 19:14:52.934159 1006315 retry.go:31] will retry after 352.119998ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdspecific-port3901608158/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh "sudo umount -f /mount-9p": exit status 1 (258.728451ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-418616 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdspecific-port3901608158/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 service list -o json
functional_test.go:1494: Took "511.063784ms" to run "out/minikube-linux-amd64 -p functional-418616 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30832
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1372698545/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1372698545/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1372698545/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T" /mount1: exit status 1 (358.973098ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 19:14:54.687266 1006315 retry.go:31] will retry after 562.428311ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-418616 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1372698545/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1372698545/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-418616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1372698545/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)
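"mount -p functional-418616 --kill=true" tears down all three background mount daemons at once, and the "unable to find parent, assuming dead: process does not exist" lines are the cleanup helper confirming those processes are already gone. A sketch of that kind of liveness probe follows, assuming the usual signal-0 convention on Linux (the helper's actual mechanism is not shown in this log).

package main

import (
	"fmt"
	"os"
	"syscall"
)

// processAlive uses the signal-0 convention: sending signal 0 performs the
// existence and permission checks without delivering a signal.
func processAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	pid := os.Getpid()
	fmt.Printf("pid %d alive: %v\n", pid, processAlive(pid))
	fmt.Printf("pid 999999 alive: %v\n", processAlive(999999)) // almost certainly gone
}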

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 service hello-node --url --format={{.IP}}
2024/12/05 19:14:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30832
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-418616 version -o=json --components: (1.042819422s)
--- PASS: TestFunctional/parallel/Version/components (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-418616 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-418616
localhost/kicbase/echo-server:functional-418616
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241023-a345ebe4
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-418616 image ls --format short --alsologtostderr:
I1205 19:15:09.138782 1049262 out.go:345] Setting OutFile to fd 1 ...
I1205 19:15:09.139049 1049262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.139059 1049262 out.go:358] Setting ErrFile to fd 2...
I1205 19:15:09.139064 1049262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.139283 1049262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
I1205 19:15:09.139884 1049262 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.139985 1049262 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.140367 1049262 cli_runner.go:164] Run: docker container inspect functional-418616 --format={{.State.Status}}
I1205 19:15:09.156636 1049262 ssh_runner.go:195] Run: systemctl --version
I1205 19:15:09.156680 1049262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-418616
I1205 19:15:09.172568 1049262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/functional-418616/id_rsa Username:docker}
I1205 19:15:09.378973 1049262 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)
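With crio as the runtime, "image ls" is backed by running "sudo crictl images --output json" inside the node (visible in the stderr trace above) and formatting the result. A sketch that decodes that JSON into the short repo:tag listing is below; the struct covers only the fields needed here, and the field names are an assumption inferred from the JSON and table listings elsewhere in this report rather than a verified crictl schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models just the fields used in this sketch.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-418616",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // e.g. registry.k8s.io/pause:3.10
		}
	}
}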

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-418616 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| localhost/minikube-local-cache-test     | functional-418616  | bac45dbc6954a | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241023-a345ebe4 | 9ca7e41918271 | 95MB   |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| localhost/kicbase/echo-server           | functional-418616  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-418616 image ls --format table --alsologtostderr:
I1205 19:15:10.049828 1049541 out.go:345] Setting OutFile to fd 1 ...
I1205 19:15:10.049973 1049541 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:10.049984 1049541 out.go:358] Setting ErrFile to fd 2...
I1205 19:15:10.050008 1049541 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:10.050197 1049541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
I1205 19:15:10.050765 1049541 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:10.050887 1049541 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:10.051292 1049541 cli_runner.go:164] Run: docker container inspect functional-418616 --format={{.State.Status}}
I1205 19:15:10.067851 1049541 ssh_runner.go:195] Run: systemctl --version
I1205 19:15:10.067892 1049541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-418616
I1205 19:15:10.090351 1049541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/functional-418616/id_rsa Username:docker}
I1205 19:15:10.279100 1049541 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)
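For reference, the listing above can be reproduced by hand; a minimal sketch, assuming the out/minikube-linux-amd64 binary from this run and the functional-418616 profile are still present on the host:

  # list the images known to the node's container runtime, as a table
  out/minikube-linux-amd64 -p functional-418616 image ls --format table --alsologtostderr
  # the same data is emitted as json or yaml for scripting (see the following subtests)
  out/minikube-linux-amd64 -p functional-418616 image ls --format json --alsologtostderr
  out/minikube-linux-amd64 -p functional-418616 image ls --format yaml --alsologtostderr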

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-418616 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561
605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":
["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5","repoDigests":["docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16","docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"],"repoTags":["docker.io/kindest/kindnetd:v20241023-a345ebe4"],"size":"94958644"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"847c7bc1a541865e150af08318f49d02d
0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/li
brary/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53958631"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899
ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-418616"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"bac45dbc6954a9c23620fb32c4973d1
c9ca77e037d7805b8ae9af747a2bd0e74","repoDigests":["localhost/minikube-local-cache-test@sha256:e7ebf9d0912587d353527ff067d39ec0e577384a22ced966ba4ed9325feea07c"],"repoTags":["localhost/minikube-local-cache-test:functional-418616"],"size":"3330"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-418616 image ls --format json --alsologtostderr:
I1205 19:15:09.665034 1049429 out.go:345] Setting OutFile to fd 1 ...
I1205 19:15:09.665139 1049429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.665147 1049429 out.go:358] Setting ErrFile to fd 2...
I1205 19:15:09.665152 1049429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.665345 1049429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
I1205 19:15:09.665943 1049429 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.666087 1049429 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.666486 1049429 cli_runner.go:164] Run: docker container inspect functional-418616 --format={{.State.Status}}
I1205 19:15:09.689308 1049429 ssh_runner.go:195] Run: systemctl --version
I1205 19:15:09.689374 1049429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-418616
I1205 19:15:09.711481 1049429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/functional-418616/id_rsa Username:docker}
I1205 19:15:09.878758 1049429 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-418616 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5
repoDigests:
- docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16
- docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d
repoTags:
- docker.io/kindest/kindnetd:v20241023-a345ebe4
size: "94958644"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: bac45dbc6954a9c23620fb32c4973d1c9ca77e037d7805b8ae9af747a2bd0e74
repoDigests:
- localhost/minikube-local-cache-test@sha256:e7ebf9d0912587d353527ff067d39ec0e577384a22ced966ba4ed9325feea07c
repoTags:
- localhost/minikube-local-cache-test:functional-418616
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53958631"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-418616
size: "4943877"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-418616 image ls --format yaml --alsologtostderr:
I1205 19:15:09.244746 1049307 out.go:345] Setting OutFile to fd 1 ...
I1205 19:15:09.244848 1049307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.244856 1049307 out.go:358] Setting ErrFile to fd 2...
I1205 19:15:09.244859 1049307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.245038 1049307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
I1205 19:15:09.245620 1049307 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.245718 1049307 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.246153 1049307 cli_runner.go:164] Run: docker container inspect functional-418616 --format={{.State.Status}}
I1205 19:15:09.264034 1049307 ssh_runner.go:195] Run: systemctl --version
I1205 19:15:09.264077 1049307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-418616
I1205 19:15:09.283775 1049307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/functional-418616/id_rsa Username:docker}
I1205 19:15:09.430314 1049307 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-418616 ssh pgrep buildkitd: exit status 1 (390.725292ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image build -t localhost/my-image:functional-418616 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-418616 image build -t localhost/my-image:functional-418616 testdata/build --alsologtostderr: (5.825144505s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-418616 image build -t localhost/my-image:functional-418616 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e7b00ebbf42
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-418616
--> 0e1a4bc3564
Successfully tagged localhost/my-image:functional-418616
0e1a4bc3564f91b8cc867e71b09ce9380ff874781b01a79e6f8dd669cf96a9c9
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-418616 image build -t localhost/my-image:functional-418616 testdata/build --alsologtostderr:
I1205 19:15:09.937629 1049498 out.go:345] Setting OutFile to fd 1 ...
I1205 19:15:09.938450 1049498 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.938464 1049498 out.go:358] Setting ErrFile to fd 2...
I1205 19:15:09.938471 1049498 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:15:09.938687 1049498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
I1205 19:15:09.939270 1049498 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.939809 1049498 config.go:182] Loaded profile config "functional-418616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:15:09.940217 1049498 cli_runner.go:164] Run: docker container inspect functional-418616 --format={{.State.Status}}
I1205 19:15:09.956398 1049498 ssh_runner.go:195] Run: systemctl --version
I1205 19:15:09.956457 1049498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-418616
I1205 19:15:09.971969 1049498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/functional-418616/id_rsa Username:docker}
I1205 19:15:10.178973 1049498 build_images.go:161] Building image from path: /tmp/build.3109759450.tar
I1205 19:15:10.179046 1049498 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 19:15:10.189302 1049498 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3109759450.tar
I1205 19:15:10.193307 1049498 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3109759450.tar: stat -c "%s %y" /var/lib/minikube/build/build.3109759450.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3109759450.tar': No such file or directory
I1205 19:15:10.193337 1049498 ssh_runner.go:362] scp /tmp/build.3109759450.tar --> /var/lib/minikube/build/build.3109759450.tar (3072 bytes)
I1205 19:15:10.285331 1049498 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3109759450
I1205 19:15:10.296000 1049498 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3109759450 -xf /var/lib/minikube/build/build.3109759450.tar
I1205 19:15:10.379504 1049498 crio.go:315] Building image: /var/lib/minikube/build/build.3109759450
I1205 19:15:10.379600 1049498 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-418616 /var/lib/minikube/build/build.3109759450 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 19:15:15.683764 1049498 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-418616 /var/lib/minikube/build/build.3109759450 --cgroup-manager=cgroupfs: (5.304140087s)
I1205 19:15:15.683822 1049498 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3109759450
I1205 19:15:15.692189 1049498 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3109759450.tar
I1205 19:15:15.699872 1049498 build_images.go:217] Built localhost/my-image:functional-418616 from /tmp/build.3109759450.tar
I1205 19:15:15.699901 1049498 build_images.go:133] succeeded building to: functional-418616
I1205 19:15:15.699905 1049498 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)
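The three STEP lines above imply a very small build context; the Dockerfile sketched below is a hypothetical reconstruction from those steps (the actual contents of testdata/build are not included in this log), followed by the build commands the test runs. Inside the node the build is delegated to podman, as the stderr above shows.

  # testdata/build/Dockerfile, hypothetical reconstruction from STEP 1/3..3/3:
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  out/minikube-linux-amd64 -p functional-418616 image build -t localhost/my-image:functional-418616 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-418616 image ls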

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-418616
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)
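Setup only seeds the host docker daemon with a per-profile tag that the later image load/save subtests consume:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-418616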

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image load --daemon kicbase/echo-server:functional-418616 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-418616 image load --daemon kicbase/echo-server:functional-418616 --alsologtostderr: (3.644880859s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.88s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-418616 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-418616 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-418616 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1046255: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-418616 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-418616 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-418616 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fd213ece-f70a-45a5-9617-9329ae7a9e03] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fd213ece-f70a-45a5-9617-9329ae7a9e03] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003532532s
I1205 19:15:08.393085 1006315 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.37s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
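All three UpdateContextCmd variants invoke the same subcommand, which refreshes the kubeconfig entry for the profile (useful after an IP or port change); a one-line sketch:

  out/minikube-linux-amd64 -p functional-418616 update-context --alsologtostderr -v=2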

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image load --daemon kicbase/echo-server:functional-418616 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-418616 image load --daemon kicbase/echo-server:functional-418616 --alsologtostderr: (1.589037632s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-418616
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image load --daemon kicbase/echo-server:functional-418616 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image save kicbase/echo-server:functional-418616 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image rm kicbase/echo-server:functional-418616 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-418616
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-418616 image save --daemon kicbase/echo-server:functional-418616 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-418616
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
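Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full round trip; a minimal sketch, assuming the same profile and using ./echo-server-save.tar in place of the CI workspace path:

  # save the image from the node to a tarball on the host
  out/minikube-linux-amd64 -p functional-418616 image save kicbase/echo-server:functional-418616 ./echo-server-save.tar --alsologtostderr
  # remove it from the node, then load it back from the tarball
  out/minikube-linux-amd64 -p functional-418616 image rm kicbase/echo-server:functional-418616 --alsologtostderr
  out/minikube-linux-amd64 -p functional-418616 image load ./echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-418616 image ls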

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-418616 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.55.125 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
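The tunnel subtests amount to the manual flow below; a sketch, assuming kubectl is pointed at the functional-418616 context and the nginx-svc LoadBalancer service from WaitService/Setup is still deployed. The tunnel is normally kept running in a second terminal, and the curl probe is a hypothetical stand-in for the HTTP check the test performs itself.

  out/minikube-linux-amd64 -p functional-418616 tunnel --alsologtostderr &
  # read the assigned ingress IP (10.99.55.125 in this run) and probe it
  kubectl --context functional-418616 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.99.55.125/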

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-418616 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-418616
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-418616
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-418616
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (96.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-392363 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 19:15:43.412894 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:43.419386 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:43.430728 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:43.452049 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:43.493393 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:43.574838 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:43.736388 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:44.058101 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:44.700140 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:45.981979 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:48.543768 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:53.665165 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:16:03.907260 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:16:24.389313 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-392363 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.382942057s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (96.05s)
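The whole HA bring-up is a single start invocation followed by a status check, exactly as logged above:

  out/minikube-linux-amd64 start -p ha-392363 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr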

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- rollout status deployment/busybox
E1205 19:17:05.351109 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-392363 -- rollout status deployment/busybox: (3.27608017s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-57mhb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-b7j5c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-d5wq4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-57mhb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-b7j5c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-d5wq4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-57mhb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-b7j5c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-d5wq4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.15s)
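DeployApp is equivalent to the kubectl sequence below, run through the minikube wrapper (pod names are specific to this run):

  out/minikube-linux-amd64 kubectl -p ha-392363 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 kubectl -p ha-392363 -- rollout status deployment/busybox
  # resolve an external and an in-cluster name from one of the busybox pods
  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-57mhb -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-57mhb -- nslookup kubernetes.default.svc.cluster.local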

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-57mhb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-57mhb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-b7j5c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-b7j5c -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-d5wq4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-392363 -- exec busybox-7dff88458-d5wq4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (31.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-392363 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-392363 -v=7 --alsologtostderr: (30.970045518s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.79s)
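Adding the worker to the running HA cluster and re-checking state uses the same two commands the test logs:

  out/minikube-linux-amd64 node add -p ha-392363 -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr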

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-392363 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (15.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp testdata/cp-test.txt ha-392363:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile343353057/001/cp-test_ha-392363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363:/home/docker/cp-test.txt ha-392363-m02:/home/docker/cp-test_ha-392363_ha-392363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test_ha-392363_ha-392363-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363:/home/docker/cp-test.txt ha-392363-m03:/home/docker/cp-test_ha-392363_ha-392363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test_ha-392363_ha-392363-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363:/home/docker/cp-test.txt ha-392363-m04:/home/docker/cp-test_ha-392363_ha-392363-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test_ha-392363_ha-392363-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp testdata/cp-test.txt ha-392363-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile343353057/001/cp-test_ha-392363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m02:/home/docker/cp-test.txt ha-392363:/home/docker/cp-test_ha-392363-m02_ha-392363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test_ha-392363-m02_ha-392363.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m02:/home/docker/cp-test.txt ha-392363-m03:/home/docker/cp-test_ha-392363-m02_ha-392363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test_ha-392363-m02_ha-392363-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m02:/home/docker/cp-test.txt ha-392363-m04:/home/docker/cp-test_ha-392363-m02_ha-392363-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test_ha-392363-m02_ha-392363-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp testdata/cp-test.txt ha-392363-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile343353057/001/cp-test_ha-392363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m03:/home/docker/cp-test.txt ha-392363:/home/docker/cp-test_ha-392363-m03_ha-392363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test_ha-392363-m03_ha-392363.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m03:/home/docker/cp-test.txt ha-392363-m02:/home/docker/cp-test_ha-392363-m03_ha-392363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test_ha-392363-m03_ha-392363-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m03:/home/docker/cp-test.txt ha-392363-m04:/home/docker/cp-test_ha-392363-m03_ha-392363-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test_ha-392363-m03_ha-392363-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp testdata/cp-test.txt ha-392363-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile343353057/001/cp-test_ha-392363-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt ha-392363:/home/docker/cp-test_ha-392363-m04_ha-392363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363 "sudo cat /home/docker/cp-test_ha-392363-m04_ha-392363.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt ha-392363-m02:/home/docker/cp-test_ha-392363-m04_ha-392363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m02 "sudo cat /home/docker/cp-test_ha-392363-m04_ha-392363-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 cp ha-392363-m04:/home/docker/cp-test.txt ha-392363-m03:/home/docker/cp-test_ha-392363-m04_ha-392363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test_ha-392363-m04_ha-392363-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.57s)
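
For reference, the copy/verify loop exercised above reduces to the commands below; a minimal sketch assuming a minikube binary on PATH (the test drives the locally built out/minikube-linux-amd64), the profile and node names from this run, and testdata paths relative to the minikube source checkout. The local destination path is chosen here for illustration.

    # copy a local file onto a node, copy it back out, and verify it over ssh
    minikube -p ha-392363 cp testdata/cp-test.txt ha-392363-m03:/home/docker/cp-test.txt
    minikube -p ha-392363 cp ha-392363-m03:/home/docker/cp-test.txt ./cp-test_ha-392363-m03.txt
    minikube -p ha-392363 ssh -n ha-392363-m03 "sudo cat /home/docker/cp-test.txt"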

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-392363 node stop m02 -v=7 --alsologtostderr: (11.807529376s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr: exit status 7 (645.80563ms)

                                                
                                                
-- stdout --
	ha-392363
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-392363-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-392363-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-392363-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:18:11.359497 1071478 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:18:11.359654 1071478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:18:11.359669 1071478 out.go:358] Setting ErrFile to fd 2...
	I1205 19:18:11.359675 1071478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:18:11.359866 1071478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:18:11.360036 1071478 out.go:352] Setting JSON to false
	I1205 19:18:11.360064 1071478 mustload.go:65] Loading cluster: ha-392363
	I1205 19:18:11.360155 1071478 notify.go:220] Checking for updates...
	I1205 19:18:11.360433 1071478 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:18:11.360458 1071478 status.go:174] checking status of ha-392363 ...
	I1205 19:18:11.361035 1071478 cli_runner.go:164] Run: docker container inspect ha-392363 --format={{.State.Status}}
	I1205 19:18:11.379095 1071478 status.go:371] ha-392363 host status = "Running" (err=<nil>)
	I1205 19:18:11.379118 1071478 host.go:66] Checking if "ha-392363" exists ...
	I1205 19:18:11.379334 1071478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363
	I1205 19:18:11.398224 1071478 host.go:66] Checking if "ha-392363" exists ...
	I1205 19:18:11.398472 1071478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:18:11.398515 1071478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363
	I1205 19:18:11.414838 1071478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363/id_rsa Username:docker}
	I1205 19:18:11.507357 1071478 ssh_runner.go:195] Run: systemctl --version
	I1205 19:18:11.511321 1071478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:18:11.521962 1071478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:18:11.573770 1071478 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-12-05 19:18:11.564806089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:18:11.574544 1071478 kubeconfig.go:125] found "ha-392363" server: "https://192.168.49.254:8443"
	I1205 19:18:11.574579 1071478 api_server.go:166] Checking apiserver status ...
	I1205 19:18:11.574630 1071478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:18:11.585793 1071478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	I1205 19:18:11.594866 1071478 api_server.go:182] apiserver freezer: "9:freezer:/docker/3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47/crio/crio-19090334223dba9588cc1a88ce633bf8ec0216518baf2207a5751133132bbc25"
	I1205 19:18:11.594922 1071478 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3f7f53f006e83692f1d3a9e2e86dd9ef17065112b03082d050069a17ac072d47/crio/crio-19090334223dba9588cc1a88ce633bf8ec0216518baf2207a5751133132bbc25/freezer.state
	I1205 19:18:11.602837 1071478 api_server.go:204] freezer state: "THAWED"
	I1205 19:18:11.602863 1071478 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 19:18:11.607770 1071478 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 19:18:11.607790 1071478 status.go:463] ha-392363 apiserver status = Running (err=<nil>)
	I1205 19:18:11.607800 1071478 status.go:176] ha-392363 status: &{Name:ha-392363 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:18:11.607839 1071478 status.go:174] checking status of ha-392363-m02 ...
	I1205 19:18:11.608093 1071478 cli_runner.go:164] Run: docker container inspect ha-392363-m02 --format={{.State.Status}}
	I1205 19:18:11.624716 1071478 status.go:371] ha-392363-m02 host status = "Stopped" (err=<nil>)
	I1205 19:18:11.624732 1071478 status.go:384] host is not running, skipping remaining checks
	I1205 19:18:11.624738 1071478 status.go:176] ha-392363-m02 status: &{Name:ha-392363-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:18:11.624756 1071478 status.go:174] checking status of ha-392363-m03 ...
	I1205 19:18:11.625007 1071478 cli_runner.go:164] Run: docker container inspect ha-392363-m03 --format={{.State.Status}}
	I1205 19:18:11.641351 1071478 status.go:371] ha-392363-m03 host status = "Running" (err=<nil>)
	I1205 19:18:11.641374 1071478 host.go:66] Checking if "ha-392363-m03" exists ...
	I1205 19:18:11.641593 1071478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m03
	I1205 19:18:11.657685 1071478 host.go:66] Checking if "ha-392363-m03" exists ...
	I1205 19:18:11.657915 1071478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:18:11.657954 1071478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m03
	I1205 19:18:11.674047 1071478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m03/id_rsa Username:docker}
	I1205 19:18:11.763187 1071478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:18:11.774074 1071478 kubeconfig.go:125] found "ha-392363" server: "https://192.168.49.254:8443"
	I1205 19:18:11.774101 1071478 api_server.go:166] Checking apiserver status ...
	I1205 19:18:11.774141 1071478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:18:11.783875 1071478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	I1205 19:18:11.792863 1071478 api_server.go:182] apiserver freezer: "9:freezer:/docker/bd070a02f97f7e2138bc9cb77bb7b37cf0d40dc14eb3794e60f9a4b96a17b5e0/crio/crio-7c3e92f20c3858c1c6c2328928a708ea47d77d11c5f6e72d42da6c73e3fda519"
	I1205 19:18:11.792919 1071478 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bd070a02f97f7e2138bc9cb77bb7b37cf0d40dc14eb3794e60f9a4b96a17b5e0/crio/crio-7c3e92f20c3858c1c6c2328928a708ea47d77d11c5f6e72d42da6c73e3fda519/freezer.state
	I1205 19:18:11.800836 1071478 api_server.go:204] freezer state: "THAWED"
	I1205 19:18:11.800862 1071478 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 19:18:11.804597 1071478 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 19:18:11.804617 1071478 status.go:463] ha-392363-m03 apiserver status = Running (err=<nil>)
	I1205 19:18:11.804624 1071478 status.go:176] ha-392363-m03 status: &{Name:ha-392363-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:18:11.804639 1071478 status.go:174] checking status of ha-392363-m04 ...
	I1205 19:18:11.804862 1071478 cli_runner.go:164] Run: docker container inspect ha-392363-m04 --format={{.State.Status}}
	I1205 19:18:11.821506 1071478 status.go:371] ha-392363-m04 host status = "Running" (err=<nil>)
	I1205 19:18:11.821527 1071478 host.go:66] Checking if "ha-392363-m04" exists ...
	I1205 19:18:11.821796 1071478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-392363-m04
	I1205 19:18:11.837891 1071478 host.go:66] Checking if "ha-392363-m04" exists ...
	I1205 19:18:11.838195 1071478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:18:11.838255 1071478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-392363-m04
	I1205 19:18:11.854734 1071478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/ha-392363-m04/id_rsa Username:docker}
	I1205 19:18:11.943153 1071478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:18:11.954352 1071478 status.go:176] ha-392363-m04 status: &{Name:ha-392363-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.45s)
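
The stop-and-check sequence above can be reproduced by hand; a minimal sketch, assuming a minikube binary on PATH and the profile name from this run:

    # stop one control-plane node, then check overall cluster status
    minikube -p ha-392363 node stop m02
    # status reports the node as Stopped and exits non-zero (exit status 7 in this run)
    minikube -p ha-392363 status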

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 node start m02 -v=7 --alsologtostderr
E1205 19:18:27.273740 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-392363 node start m02 -v=7 --alsologtostderr: (21.805554485s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr: (1.073562025s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.95s)
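
Restarting the stopped node is the inverse step; a minimal sketch with the same assumptions:

    # bring the stopped control-plane node back and confirm it rejoins
    minikube -p ha-392363 node start m02
    minikube -p ha-392363 status
    kubectl get nodes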

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (195.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-392363 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-392363 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-392363 -v=7 --alsologtostderr: (36.593717982s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-392363 --wait=true -v=7 --alsologtostderr
E1205 19:19:42.987985 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:42.994460 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:43.005882 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:43.027316 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:43.068724 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:43.150152 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:43.311681 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:43.633574 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:44.275270 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:45.557620 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:48.119231 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:19:53.241452 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:20:03.483745 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:20:23.965505 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:20:43.412048 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:21:04.926860 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:21:11.116027 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-392363 --wait=true -v=7 --alsologtostderr: (2m38.522950191s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-392363
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (195.25s)
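
The full stop/restart cycle above, stripped of the verbosity flags; a minimal sketch assuming a minikube binary on PATH:

    # record the node list, stop the whole cluster, restart it, and compare
    minikube node list -p ha-392363
    minikube stop -p ha-392363
    minikube start -p ha-392363 --wait=true
    minikube node list -p ha-392363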

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-392363 node delete m03 -v=7 --alsologtostderr: (11.338133136s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.07s)
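
Deleting a secondary control-plane node follows the same pattern; a minimal sketch:

    # remove one control-plane node and verify the remaining nodes are Ready
    minikube -p ha-392363 node delete m03
    minikube -p ha-392363 status
    kubectl get nodes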

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 stop -v=7 --alsologtostderr
E1205 19:22:26.850281 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-392363 stop -v=7 --alsologtostderr: (35.335187567s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr: exit status 7 (101.952756ms)

                                                
                                                
-- stdout --
	ha-392363
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-392363-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-392363-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:22:39.888850 1089676 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:22:39.889004 1089676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:22:39.889018 1089676 out.go:358] Setting ErrFile to fd 2...
	I1205 19:22:39.889026 1089676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:22:39.889202 1089676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:22:39.889356 1089676 out.go:352] Setting JSON to false
	I1205 19:22:39.889392 1089676 mustload.go:65] Loading cluster: ha-392363
	I1205 19:22:39.889438 1089676 notify.go:220] Checking for updates...
	I1205 19:22:39.889967 1089676 config.go:182] Loaded profile config "ha-392363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:39.890018 1089676 status.go:174] checking status of ha-392363 ...
	I1205 19:22:39.890643 1089676 cli_runner.go:164] Run: docker container inspect ha-392363 --format={{.State.Status}}
	I1205 19:22:39.907958 1089676 status.go:371] ha-392363 host status = "Stopped" (err=<nil>)
	I1205 19:22:39.907975 1089676 status.go:384] host is not running, skipping remaining checks
	I1205 19:22:39.907981 1089676 status.go:176] ha-392363 status: &{Name:ha-392363 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:22:39.908010 1089676 status.go:174] checking status of ha-392363-m02 ...
	I1205 19:22:39.908269 1089676 cli_runner.go:164] Run: docker container inspect ha-392363-m02 --format={{.State.Status}}
	I1205 19:22:39.924693 1089676 status.go:371] ha-392363-m02 host status = "Stopped" (err=<nil>)
	I1205 19:22:39.924722 1089676 status.go:384] host is not running, skipping remaining checks
	I1205 19:22:39.924732 1089676 status.go:176] ha-392363-m02 status: &{Name:ha-392363-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:22:39.924758 1089676 status.go:174] checking status of ha-392363-m04 ...
	I1205 19:22:39.924996 1089676 cli_runner.go:164] Run: docker container inspect ha-392363-m04 --format={{.State.Status}}
	I1205 19:22:39.941070 1089676 status.go:371] ha-392363-m04 host status = "Stopped" (err=<nil>)
	I1205 19:22:39.941088 1089676 status.go:384] host is not running, skipping remaining checks
	I1205 19:22:39.941093 1089676 status.go:176] ha-392363-m04 status: &{Name:ha-392363-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.44s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (40.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-392363 --control-plane -v=7 --alsologtostderr
E1205 19:25:10.692288 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-392363 --control-plane -v=7 --alsologtostderr: (39.375753796s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-392363 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.20s)
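
Adding a control-plane node back is a single command; a minimal sketch with the flags from this run (verbosity flags omitted):

    # add a new control-plane node to the running HA cluster
    minikube node add -p ha-392363 --control-plane
    minikube -p ha-392363 status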

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                    
TestJSONOutput/start/Command (39.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-921536 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1205 19:25:43.412795 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-921536 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (39.390088206s)
--- PASS: TestJSONOutput/start/Command (39.39s)
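
The same start can be driven with machine-readable output; a minimal sketch using the flags from this run (the pause, unpause and stop subtests below accept --output=json the same way):

    minikube start -p json-output-921536 --output=json --user=testUser \
      --memory=2200 --wait=true --driver=docker --container-runtime=crio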

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-921536 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-921536 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.74s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-921536 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-921536 --output=json --user=testUser: (5.739893775s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-603946 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-603946 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.159801ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e47d2eaa-1d38-40ca-9d57-eb4e69dc5ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-603946] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec67439c-c920-4895-a03d-5058a03caf29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20052"}}
	{"specversion":"1.0","id":"62650d79-fb02-4066-b11a-b008f35cb180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9b6c073e-2f72-4962-a206-3400bb89a9ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig"}}
	{"specversion":"1.0","id":"9471f8c9-7694-4af9-af32-fa64997a8a96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube"}}
	{"specversion":"1.0","id":"973a8006-fb84-414a-8699-246843f647be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ae3ef664-b65a-4cc9-b906-8351c2941bc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"435d1c39-69b4-460c-af5d-346cfc6465c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-603946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-603946
--- PASS: TestErrorJSONOutput (0.20s)
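
Failures are emitted as structured events too; a minimal sketch of the failing invocation from this run (the unsupported "fail" driver yields a DRV_UNSUPPORTED_OS error event and exit code 56):

    minikube start -p json-output-error-603946 --memory=2200 --output=json --wait=true --driver=fail
    minikube delete -p json-output-error-603946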

                                                
                                    
TestKicCustomNetwork/create_custom_network (30.16s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-601015 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-601015 --network=: (28.087994169s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-601015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-601015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-601015: (2.055127143s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.16s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.3s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-222841 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-222841 --network=bridge: (20.381683467s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-222841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-222841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-222841: (1.904138462s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.30s)

                                                
                                    
TestKicExistingNetwork (22.22s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1205 19:27:20.986125 1006315 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1205 19:27:21.001531 1006315 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1205 19:27:21.001588 1006315 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1205 19:27:21.001604 1006315 cli_runner.go:164] Run: docker network inspect existing-network
W1205 19:27:21.017811 1006315 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1205 19:27:21.017854 1006315 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1205 19:27:21.017872 1006315 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1205 19:27:21.018027 1006315 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 19:27:21.034335 1006315 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9251d5f0ef75 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:60:ab:6a:42} reservation:<nil>}
I1205 19:27:21.034834 1006315 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cff2a0}
I1205 19:27:21.034877 1006315 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1205 19:27:21.034940 1006315 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1205 19:27:21.095047 1006315 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-070354 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-070354 --network=existing-network: (20.160204704s)
helpers_test.go:175: Cleaning up "existing-network-070354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-070354
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-070354: (1.921715617s)
I1205 19:27:43.192577 1006315 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.22s)
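
A pre-existing Docker network can be reused by name; a minimal sketch based on this run, with the network-create options trimmed to the essentials and cleanup of the network added for completeness:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    minikube start -p existing-network-070354 --network=existing-network
    minikube delete -p existing-network-070354
    docker network rm existing-network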

                                                
                                    
TestKicCustomSubnet (23.72s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-477273 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-477273 --subnet=192.168.60.0/24: (22.050331607s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-477273 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-477273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-477273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-477273: (1.656196391s)
--- PASS: TestKicCustomSubnet (23.72s)

                                                
                                    
TestKicStaticIP (25.72s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-332953 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-332953 --static-ip=192.168.200.200: (23.623406412s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-332953 ip
helpers_test.go:175: Cleaning up "static-ip-332953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-332953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-332953: (1.976440047s)
--- PASS: TestKicStaticIP (25.72s)
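
The KIC driver can also pin the cluster subnet or the node IP; a minimal sketch combining the two runs above:

    # pick the subnet for the cluster network
    minikube start -p custom-subnet-477273 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-477273 --format "{{(index .IPAM.Config 0).Subnet}}"
    # or pin a static node IP and read it back
    minikube start -p static-ip-332953 --static-ip=192.168.200.200
    minikube -p static-ip-332953 ip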

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (48.79s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-139110 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-139110 --driver=docker  --container-runtime=crio: (20.152768896s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-164364 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-164364 --driver=docker  --container-runtime=crio: (23.432865302s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-139110
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-164364
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-164364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-164364
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-164364: (1.874583678s)
helpers_test.go:175: Cleaning up "first-139110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-139110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-139110: (2.214341288s)
--- PASS: TestMinikubeProfile (48.79s)
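
The profile workflow above, condensed; a minimal sketch assuming a minikube binary on PATH:

    # create two profiles, switch between them, and inspect the profile list
    minikube start -p first-139110 --driver=docker --container-runtime=crio
    minikube start -p second-164364 --driver=docker --container-runtime=crio
    minikube profile first-139110
    minikube profile list -ojson
    minikube delete -p second-164364
    minikube delete -p first-139110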

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-727837 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-727837 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.365466634s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.37s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-727837 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
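
The mount-only start exercised above, with the host mount checked over ssh; a minimal sketch using the flags from this run:

    minikube start -p mount-start-1-727837 --memory=2048 --mount --mount-gid 0 \
      --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes \
      --driver=docker --container-runtime=crio
    minikube -p mount-start-1-727837 ssh -- ls /minikube-host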

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-744436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-744436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.458447181s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-744436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-727837 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-727837 --alsologtostderr -v=5: (1.599541965s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-744436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-744436
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-744436: (1.170097433s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.03s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-744436
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-744436: (6.028755879s)
--- PASS: TestMountStart/serial/RestartStopped (7.03s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-744436 ssh -- ls /minikube-host
E1205 19:29:42.988293 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)
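
For reference, the MountStart steps above reduce to a short CLI sequence. A minimal sketch, assuming a minikube binary on PATH and the same docker driver / crio runtime; the profile name is illustrative and the less common mount flags (--mount-gid, --mount-uid, --mount-msize) are omitted:

    # start a profile with a 9p host mount and no Kubernetes components
    minikube start -p mount-demo --memory=2048 --mount --mount-port 46465 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # the host directory appears inside the node at /minikube-host
    minikube -p mount-demo ssh -- ls /minikube-host
    # the mount is still there after a stop and a restart of the same profile
    minikube stop -p mount-demo
    minikube start -p mount-demo
    minikube -p mount-demo ssh -- ls /minikube-host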

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (66.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695444 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 19:30:43.412714 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695444 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.176730718s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.61s)
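
The two-node bring-up above corresponds roughly to the following; a sketch only, with an illustrative profile name and the same driver/runtime flags as the test run:

    # create a two-node cluster and wait for all components to be ready
    minikube start -p multinode-demo --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=crio
    # both the control plane and the worker should report Running
    minikube -p multinode-demo status --alsologtostderr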

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-695444 -- rollout status deployment/busybox: (2.865807801s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-bzhht -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-kgb88 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-bzhht -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-kgb88 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-bzhht -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-kgb88 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.21s)
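
The DNS checks above go through the bundled kubectl ("minikube kubectl -p <profile> --"). A minimal sketch, assuming the busybox test deployment has already been applied; the profile name is illustrative and <pod-name> is a placeholder for one of the names returned by the get:

    # wait for the test deployment to finish rolling out
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    # list the pod names, then resolve in-cluster and external names from inside a pod
    minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p multinode-demo -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local
    minikube kubectl -p multinode-demo -- exec <pod-name> -- nslookup kubernetes.io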

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-bzhht -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-bzhht -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-kgb88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695444 -- exec busybox-7dff88458-kgb88 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                    
TestMultiNode/serial/AddNode (26.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-695444 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-695444 -v 3 --alsologtostderr: (25.552928032s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.13s)
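
Adding the third node as done above is a single command; sketch with an illustrative profile name:

    # add one more worker to an existing profile, then re-check status
    minikube node add -p multinode-demo -v 3 --alsologtostderr
    minikube -p multinode-demo status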

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-695444 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp testdata/cp-test.txt multinode-695444:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1765869458/001/cp-test_multinode-695444.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444:/home/docker/cp-test.txt multinode-695444-m02:/home/docker/cp-test_multinode-695444_multinode-695444-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m02 "sudo cat /home/docker/cp-test_multinode-695444_multinode-695444-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444:/home/docker/cp-test.txt multinode-695444-m03:/home/docker/cp-test_multinode-695444_multinode-695444-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m03 "sudo cat /home/docker/cp-test_multinode-695444_multinode-695444-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp testdata/cp-test.txt multinode-695444-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1765869458/001/cp-test_multinode-695444-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444-m02:/home/docker/cp-test.txt multinode-695444:/home/docker/cp-test_multinode-695444-m02_multinode-695444.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444 "sudo cat /home/docker/cp-test_multinode-695444-m02_multinode-695444.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444-m02:/home/docker/cp-test.txt multinode-695444-m03:/home/docker/cp-test_multinode-695444-m02_multinode-695444-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m03 "sudo cat /home/docker/cp-test_multinode-695444-m02_multinode-695444-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp testdata/cp-test.txt multinode-695444-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1765869458/001/cp-test_multinode-695444-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444-m03:/home/docker/cp-test.txt multinode-695444:/home/docker/cp-test_multinode-695444-m03_multinode-695444.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444 "sudo cat /home/docker/cp-test_multinode-695444-m03_multinode-695444.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 cp multinode-695444-m03:/home/docker/cp-test.txt multinode-695444-m02:/home/docker/cp-test_multinode-695444-m03_multinode-695444-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 ssh -n multinode-695444-m02 "sudo cat /home/docker/cp-test_multinode-695444-m03_multinode-695444-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.80s)
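
The copy matrix above exercises three forms of "minikube cp" (host-to-node, node-to-host, node-to-node), each verified with "ssh -n". A condensed sketch, assuming a local cp-test.txt; profile name, node names and paths are illustrative:

    # host -> node
    minikube -p multinode-demo cp cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    # node -> node (control plane to worker), then verify on the target node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"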

                                                
                                    
TestMultiNode/serial/StopNode (2.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-695444 node stop m03: (1.173444519s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695444 status: exit status 7 (441.343359ms)

                                                
                                                
-- stdout --
	multinode-695444
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-695444-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-695444-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr: exit status 7 (444.90474ms)

                                                
                                                
-- stdout --
	multinode-695444
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-695444-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-695444-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:31:33.612499 1157056 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:31:33.612759 1157056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:31:33.612769 1157056 out.go:358] Setting ErrFile to fd 2...
	I1205 19:31:33.612773 1157056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:31:33.612970 1157056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:31:33.613157 1157056 out.go:352] Setting JSON to false
	I1205 19:31:33.613182 1157056 mustload.go:65] Loading cluster: multinode-695444
	I1205 19:31:33.613226 1157056 notify.go:220] Checking for updates...
	I1205 19:31:33.613577 1157056 config.go:182] Loaded profile config "multinode-695444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:31:33.613598 1157056 status.go:174] checking status of multinode-695444 ...
	I1205 19:31:33.613987 1157056 cli_runner.go:164] Run: docker container inspect multinode-695444 --format={{.State.Status}}
	I1205 19:31:33.633392 1157056 status.go:371] multinode-695444 host status = "Running" (err=<nil>)
	I1205 19:31:33.633428 1157056 host.go:66] Checking if "multinode-695444" exists ...
	I1205 19:31:33.633703 1157056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-695444
	I1205 19:31:33.649875 1157056 host.go:66] Checking if "multinode-695444" exists ...
	I1205 19:31:33.650144 1157056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:31:33.650183 1157056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-695444
	I1205 19:31:33.666381 1157056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/multinode-695444/id_rsa Username:docker}
	I1205 19:31:33.755205 1157056 ssh_runner.go:195] Run: systemctl --version
	I1205 19:31:33.759222 1157056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:31:33.769319 1157056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:31:33.813748 1157056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-12-05 19:31:33.805154537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:31:33.814401 1157056 kubeconfig.go:125] found "multinode-695444" server: "https://192.168.67.2:8443"
	I1205 19:31:33.814435 1157056 api_server.go:166] Checking apiserver status ...
	I1205 19:31:33.814476 1157056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:31:33.824636 1157056 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	I1205 19:31:33.833006 1157056 api_server.go:182] apiserver freezer: "9:freezer:/docker/304fa75d80b33637c94744fa67eca5035e383024480668a3b2da00afdad9c342/crio/crio-d64e4fd9f9317c055ba83cb32874a35110139c69546a217b0e209ddf3c4bc6f1"
	I1205 19:31:33.833058 1157056 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/304fa75d80b33637c94744fa67eca5035e383024480668a3b2da00afdad9c342/crio/crio-d64e4fd9f9317c055ba83cb32874a35110139c69546a217b0e209ddf3c4bc6f1/freezer.state
	I1205 19:31:33.840306 1157056 api_server.go:204] freezer state: "THAWED"
	I1205 19:31:33.840328 1157056 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1205 19:31:33.844029 1157056 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1205 19:31:33.844050 1157056 status.go:463] multinode-695444 apiserver status = Running (err=<nil>)
	I1205 19:31:33.844059 1157056 status.go:176] multinode-695444 status: &{Name:multinode-695444 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:31:33.844074 1157056 status.go:174] checking status of multinode-695444-m02 ...
	I1205 19:31:33.844304 1157056 cli_runner.go:164] Run: docker container inspect multinode-695444-m02 --format={{.State.Status}}
	I1205 19:31:33.860160 1157056 status.go:371] multinode-695444-m02 host status = "Running" (err=<nil>)
	I1205 19:31:33.860180 1157056 host.go:66] Checking if "multinode-695444-m02" exists ...
	I1205 19:31:33.860415 1157056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-695444-m02
	I1205 19:31:33.875243 1157056 host.go:66] Checking if "multinode-695444-m02" exists ...
	I1205 19:31:33.875464 1157056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:31:33.875495 1157056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-695444-m02
	I1205 19:31:33.890746 1157056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20052-999445/.minikube/machines/multinode-695444-m02/id_rsa Username:docker}
	I1205 19:31:33.978763 1157056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:31:33.989195 1157056 status.go:176] multinode-695444-m02 status: &{Name:multinode-695444-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:31:33.989236 1157056 status.go:174] checking status of multinode-695444-m03 ...
	I1205 19:31:33.989524 1157056 cli_runner.go:164] Run: docker container inspect multinode-695444-m03 --format={{.State.Status}}
	I1205 19:31:34.006930 1157056 status.go:371] multinode-695444-m03 host status = "Stopped" (err=<nil>)
	I1205 19:31:34.006951 1157056 status.go:384] host is not running, skipping remaining checks
	I1205 19:31:34.006959 1157056 status.go:176] multinode-695444-m03 status: &{Name:multinode-695444-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.06s)
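
Note the exit code above: once any node is stopped, minikube status returns 7 instead of 0, while the remaining nodes still report Running. A minimal sketch of the same check, profile name illustrative:

    # stop only the third node, then query status; exit status 7 signals a stopped host
    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status || echo "status exited with $?"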

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-695444 node start m03 -v=7 --alsologtostderr: (8.035789436s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.68s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (93.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-695444
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-695444
E1205 19:32:06.478816 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-695444: (24.606660797s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695444 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695444 --wait=true -v=8 --alsologtostderr: (1m9.137463618s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-695444
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.84s)
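
The restart check above is: record the node list, stop the whole profile, start it again with --wait=true, and confirm the list is unchanged. Sketch, profile name illustrative:

    minikube node list -p multinode-demo
    minikube stop -p multinode-demo
    minikube start -p multinode-demo --wait=true
    minikube node list -p multinode-demo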

                                                
                                    
TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-695444 node delete m03: (4.668688137s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-695444 stop: (23.469070174s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695444 status: exit status 7 (85.685242ms)

                                                
                                                
-- stdout --
	multinode-695444
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-695444-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr: exit status 7 (84.162876ms)

                                                
                                                
-- stdout --
	multinode-695444
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-695444-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:33:45.351765 1166752 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:33:45.351887 1166752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:33:45.351897 1166752 out.go:358] Setting ErrFile to fd 2...
	I1205 19:33:45.351901 1166752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:33:45.352105 1166752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:33:45.352267 1166752 out.go:352] Setting JSON to false
	I1205 19:33:45.352298 1166752 mustload.go:65] Loading cluster: multinode-695444
	I1205 19:33:45.352387 1166752 notify.go:220] Checking for updates...
	I1205 19:33:45.352693 1166752 config.go:182] Loaded profile config "multinode-695444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:33:45.352713 1166752 status.go:174] checking status of multinode-695444 ...
	I1205 19:33:45.353210 1166752 cli_runner.go:164] Run: docker container inspect multinode-695444 --format={{.State.Status}}
	I1205 19:33:45.369856 1166752 status.go:371] multinode-695444 host status = "Stopped" (err=<nil>)
	I1205 19:33:45.369874 1166752 status.go:384] host is not running, skipping remaining checks
	I1205 19:33:45.369880 1166752 status.go:176] multinode-695444 status: &{Name:multinode-695444 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:33:45.369897 1166752 status.go:174] checking status of multinode-695444-m02 ...
	I1205 19:33:45.370157 1166752 cli_runner.go:164] Run: docker container inspect multinode-695444-m02 --format={{.State.Status}}
	I1205 19:33:45.386319 1166752 status.go:371] multinode-695444-m02 host status = "Stopped" (err=<nil>)
	I1205 19:33:45.386336 1166752 status.go:384] host is not running, skipping remaining checks
	I1205 19:33:45.386342 1166752 status.go:176] multinode-695444-m02 status: &{Name:multinode-695444-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.64s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695444 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695444 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (47.257291746s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695444 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-695444
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695444-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-695444-m02 --driver=docker  --container-runtime=crio: exit status 14 (65.662275ms)

                                                
                                                
-- stdout --
	* [multinode-695444-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-695444-m02' is duplicated with machine name 'multinode-695444-m02' in profile 'multinode-695444'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695444-m03 --driver=docker  --container-runtime=crio
E1205 19:34:42.994226 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695444-m03 --driver=docker  --container-runtime=crio: (23.155076391s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-695444
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-695444: exit status 80 (266.841289ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-695444 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-695444-m03 already exists in multinode-695444-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-695444-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-695444-m03: (1.850732246s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.39s)

                                                
                                    
TestPreload (102.9s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-601013 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1205 19:35:43.412862 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:36:06.053793 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-601013 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m16.29763072s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-601013 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-601013 image pull gcr.io/k8s-minikube/busybox: (2.164507378s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-601013
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-601013: (5.667836912s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-601013 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-601013 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.556209195s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-601013 image list
helpers_test.go:175: Cleaning up "test-preload-601013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-601013
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-601013: (1.984207283s)
--- PASS: TestPreload (102.90s)
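
The preload scenario above starts without a preload tarball, pulls an extra image into the node's runtime, then restarts and checks the image is still listed. Sketch; the profile name is illustrative, the image is the one used in the run above:

    # start with preload disabled on an older Kubernetes version
    minikube start -p preload-demo --memory=2200 --preload=false \
      --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
    # pull an image, restart the profile, and confirm the image survived
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo
    minikube -p preload-demo image list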

                                                
                                    
TestScheduledStopUnix (98.73s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-966858 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-966858 --memory=2048 --driver=docker  --container-runtime=crio: (22.713843489s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-966858 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-966858 -n scheduled-stop-966858
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-966858 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1205 19:37:08.538399 1006315 retry.go:31] will retry after 115.25µs: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.539552 1006315 retry.go:31] will retry after 158.835µs: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.540671 1006315 retry.go:31] will retry after 263.141µs: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.541783 1006315 retry.go:31] will retry after 418.34µs: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.542882 1006315 retry.go:31] will retry after 667.767µs: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.543998 1006315 retry.go:31] will retry after 566.497µs: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.545114 1006315 retry.go:31] will retry after 885.24µs: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.546258 1006315 retry.go:31] will retry after 2.272295ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.549486 1006315 retry.go:31] will retry after 1.808162ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.551682 1006315 retry.go:31] will retry after 5.262157ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.557889 1006315 retry.go:31] will retry after 4.859933ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.563100 1006315 retry.go:31] will retry after 6.102677ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.569310 1006315 retry.go:31] will retry after 11.966785ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.581517 1006315 retry.go:31] will retry after 19.711742ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
I1205 19:37:08.601753 1006315 retry.go:31] will retry after 30.765109ms: open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/scheduled-stop-966858/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-966858 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-966858 -n scheduled-stop-966858
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-966858
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-966858 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-966858
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-966858: exit status 7 (69.190947ms)

                                                
                                                
-- stdout --
	scheduled-stop-966858
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-966858 -n scheduled-stop-966858
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-966858 -n scheduled-stop-966858: exit status 7 (65.164663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-966858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-966858
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-966858: (4.707559246s)
--- PASS: TestScheduledStopUnix (98.73s)
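
The scheduled-stop flow above schedules a stop, cancels it, then schedules a short one and waits for the host to reach Stopped (at which point minikube status exits 7). Sketch, profile name illustrative:

    # schedule a stop five minutes out, then cancel it
    minikube stop -p sched-demo --schedule 5m
    minikube stop -p sched-demo --cancel-scheduled
    # schedule a short stop and wait for it to fire
    minikube stop -p sched-demo --schedule 15s
    sleep 20
    minikube status -p sched-demo --format='{{.Host}}'   # prints Stopped; exits 7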

                                                
                                    
TestInsufficientStorage (9.87s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-353524 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-353524 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.551285581s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0e281dc8-d0e3-4c1a-8907-3b426b448986","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-353524] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"80dd5cfa-19a6-4ed4-81b9-ae2a5ed189df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20052"}}
	{"specversion":"1.0","id":"238970e9-ca69-471e-875e-c35077d648f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"86e5b41c-c333-4907-883a-4dcd075f5c62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig"}}
	{"specversion":"1.0","id":"eefef5ec-e682-479c-b0e7-6198c36465af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube"}}
	{"specversion":"1.0","id":"0a77ce50-6a80-4b0f-90c8-76253da1332b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a2bbf4f6-86b9-4eba-8205-334c2c8e2bf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"67a86892-0b06-4924-bb0f-4f7a47b1f7b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"91bea524-37e4-40ad-a783-b48dbf8a08e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"013707c6-f9ab-41c0-9ceb-a2183961555b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cae02f4c-3885-48d4-9afd-364036e364d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"60a3e1e4-da48-4fff-aac8-8cb9ea389f93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-353524\" primary control-plane node in \"insufficient-storage-353524\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4538e5c6-5853-48f5-aea4-86e7bfeea767","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9735e23b-dc7d-4355-a886-ae44c2874abe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"57ff8f63-76c0-4d21-ba9f-ebfb6f8a8d8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-353524 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-353524 --output=json --layout=cluster: exit status 7 (249.666003ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-353524","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-353524","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 19:38:31.949969 1189105 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-353524" does not appear in /home/jenkins/minikube-integration/20052-999445/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-353524 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-353524 --output=json --layout=cluster: exit status 7 (250.021428ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-353524","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-353524","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 19:38:32.200345 1189205 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-353524" does not appear in /home/jenkins/minikube-integration/20052-999445/kubeconfig
	E1205 19:38:32.210151 1189205 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/insufficient-storage-353524/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-353524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-353524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-353524: (1.822000851s)
--- PASS: TestInsufficientStorage (9.87s)
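
Both the start and the status calls above emit machine-readable JSON, so the storage condition can be checked without scraping text. A small sketch, assuming jq is available (jq is not used by the test itself); profile name illustrative:

    # start with JSON (CloudEvents) output; an io.k8s.sigs.minikube.error event named
    # RSRC_DOCKER_STORAGE with exitcode 26 means the disk-space check failed
    minikube start -p storage-demo --memory=2048 --output=json --wait=true \
      --driver=docker --container-runtime=crio
    # cluster-level status as JSON; StatusName is "InsufficientStorage" when /var is nearly full
    minikube status -p storage-demo --output=json --layout=cluster | jq -r '.StatusName'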

                                                
                                    
TestRunningBinaryUpgrade (113.99s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1950416233 start -p running-upgrade-685524 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1950416233 start -p running-upgrade-685524 --memory=2200 --vm-driver=docker  --container-runtime=crio: (26.460495239s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-685524 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-685524 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m25.022128088s)
helpers_test.go:175: Cleaning up "running-upgrade-685524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-685524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-685524: (2.097590404s)
--- PASS: TestRunningBinaryUpgrade (113.99s)
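
The running-binary upgrade above is: create the cluster with an older minikube release, then run start on the same profile with the current binary while the cluster is still running. Sketch; the old-binary path and profile name are illustrative:

    # bring the cluster up with the old release (which still uses --vm-driver)
    /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
    # upgrade in place by starting the same profile with the current binary
    minikube start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio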

                                                
                                    
TestKubernetesUpgrade (341.96s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 19:40:43.411807 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.429344301s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-401588
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-401588: (1.184345318s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-401588 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-401588 status --format={{.Host}}: exit status 7 (66.728833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.190895792s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-401588 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (78.9759ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-401588] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-401588
	    minikube start -p kubernetes-upgrade-401588 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4015882 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-401588 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-401588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.70063678s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-401588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-401588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-401588: (2.24642833s)
--- PASS: TestKubernetesUpgrade (341.96s)
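The downgrade attempt above is expected to fail: the start command exits with status 106 and names K8S_DOWNGRADE_UNSUPPORTED on stderr. For illustration, a minimal Go sketch of that style of expected-failure assertion with os/exec is shown below; the binary path, profile name, flags and exit code are taken from the log, while the surrounding scaffolding is an assumption, not minikube's actual test code.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Expected-failure check: the downgrade command must exit non-zero
	// (the log above shows exit status 106) and explain why on stderr.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-401588",
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=crio")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	if err == nil {
		fmt.Println("unexpected success: the downgrade should have been rejected")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode())
	}
	if strings.Contains(stderr.String(), "K8S_DOWNGRADE_UNSUPPORTED") {
		fmt.Println("stderr names the expected downgrade-unsupported reason")
	}
}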

                                                
                                    
x
+
TestMissingContainerUpgrade (136.99s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3156483107 start -p missing-upgrade-700838 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3156483107 start -p missing-upgrade-700838 --memory=2200 --driver=docker  --container-runtime=crio: (1m8.482060116s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-700838
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-700838: (10.517311004s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-700838
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-700838 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-700838 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.021039709s)
helpers_test.go:175: Cleaning up "missing-upgrade-700838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-700838
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-700838: (2.128312594s)
--- PASS: TestMissingContainerUpgrade (136.99s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654238 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-654238 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (77.279667ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-654238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
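The exit status 14 above comes from minikube rejecting --kubernetes-version together with --no-kubernetes before doing any work. For illustration only, here is a small Go sketch of that kind of mutual-exclusion check using the standard flag package; the flag names, message and exit code mirror the log, but the handling itself is hypothetical rather than minikube's actual option parsing.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// Reject the contradictory combination up front, as the usage error above does.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flag combination accepted")
}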

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (28.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654238 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654238 --driver=docker  --container-runtime=crio: (28.542981992s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-654238 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.84s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (103.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2462026861 start -p stopped-upgrade-679531 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2462026861 start -p stopped-upgrade-679531 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m16.842120337s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2462026861 -p stopped-upgrade-679531 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2462026861 -p stopped-upgrade-679531 stop: (2.7732998s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-679531 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-679531 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.961782493s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (10.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654238 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654238 --no-kubernetes --driver=docker  --container-runtime=crio: (7.87583062s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-654238 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-654238 status -o json: exit status 2 (289.329778ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-654238","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-654238
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-654238: (1.864186826s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.03s)
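The subtest still passes despite the status command exiting with status 2: with Kubernetes intentionally stopped, a non-zero exit is expected, and the JSON body reports the host Running with the kubelet and apiserver Stopped. A small Go sketch of decoding that JSON with encoding/json follows; the struct fields come straight from the output shown, while the boolean condition is a simplified stand-in for what the test actually asserts.

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the fields printed by `minikube status -o json` above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-654238","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// The interesting property: the container host is up while Kubernetes is not.
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped")
}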

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654238 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654238 --no-kubernetes --driver=docker  --container-runtime=crio: (4.552652576s)
--- PASS: TestNoKubernetes/serial/Start (4.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-654238 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-654238 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.923097ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (6.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-654238
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-654238: (6.464955596s)
--- PASS: TestNoKubernetes/serial/Stop (6.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654238 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654238 --driver=docker  --container-runtime=crio: (7.671155787s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-654238 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-654238 "sudo systemctl is-active --quiet service kubelet": exit status 1 (328.114002ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
x
+
TestPause/serial/Start (44.31s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-577403 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1205 19:39:42.988859 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-577403 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.31222619s)
--- PASS: TestPause/serial/Start (44.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-679531
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (33.07s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-577403 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-577403 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.049969915s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.07s)

                                                
                                    
x
+
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-577403 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-577403 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-577403 --output=json --layout=cluster: exit status 2 (310.566272ms)

                                                
                                                
-- stdout --
	{"Name":"pause-577403","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-577403","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
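The paused profile reports StatusCode 418 ("Paused") for the cluster and the apiserver, with the kubelet at 405 ("Stopped"). Below is a minimal Go sketch that decodes a trimmed copy of that --layout=cluster JSON; the struct declares only the fields used here and is an illustration, not minikube's own status types.

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus declares just the parts of the layout=cluster document used below.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	// Trimmed from the output above; the full document also carries BinaryVersion,
	// the Step details and the kubeconfig component.
	raw := `{"Name":"pause-577403","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-577403","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, st.Nodes[0].Components["apiserver"].StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}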

                                                
                                    
x
+
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-577403 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-577403 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.56s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-577403 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-577403 --alsologtostderr -v=5: (2.563885865s)
--- PASS: TestPause/serial/DeletePaused (2.56s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.98s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.91863925s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-577403
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-577403: exit status 1 (17.757361ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-577403: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.98s)
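The cleanup verification relies on `docker volume inspect` failing once the profile's volume is gone, as it does above. A short Go sketch of that kind of absence check with os/exec follows; treating any non-zero exit as "volume absent" is a simplification for illustration, not the exact logic of pause_test.go.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// After `minikube delete -p pause-577403`, inspecting the profile volume
	// should fail with "no such volume", as seen in the log above.
	out, err := exec.Command("docker", "volume", "inspect", "pause-577403").CombinedOutput()
	if err != nil {
		fmt.Println("volume is gone (expected):", err)
		return
	}
	fmt.Println("volume still present:", string(out))
}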

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-376504 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-376504 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (257.732992ms)

                                                
                                                
-- stdout --
	* [false-376504] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:41:16.820221 1233532 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:41:16.820323 1233532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:41:16.820330 1233532 out.go:358] Setting ErrFile to fd 2...
	I1205 19:41:16.820334 1233532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:41:16.820519 1233532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-999445/.minikube/bin
	I1205 19:41:16.821054 1233532 out.go:352] Setting JSON to false
	I1205 19:41:16.822168 1233532 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":80628,"bootTime":1733347049,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:41:16.822238 1233532 start.go:139] virtualization: kvm guest
	I1205 19:41:16.824054 1233532 out.go:177] * [false-376504] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:41:16.825543 1233532 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:41:16.825621 1233532 notify.go:220] Checking for updates...
	I1205 19:41:16.827849 1233532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:41:16.828961 1233532 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-999445/kubeconfig
	I1205 19:41:16.830201 1233532 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-999445/.minikube
	I1205 19:41:16.831313 1233532 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:41:16.832382 1233532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:41:16.834894 1233532 config.go:182] Loaded profile config "force-systemd-env-532867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:41:16.835114 1233532 config.go:182] Loaded profile config "kubernetes-upgrade-401588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:41:16.835750 1233532 config.go:182] Loaded profile config "running-upgrade-685524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1205 19:41:16.835890 1233532 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:41:16.872170 1233532 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 19:41:16.872401 1233532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:41:16.933032 1233532 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:76 SystemTime:2024-12-05 19:41:16.924231843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:41:16.933140 1233532 docker.go:318] overlay module found
	I1205 19:41:16.940229 1233532 out.go:177] * Using the docker driver based on user configuration
	I1205 19:41:16.947453 1233532 start.go:297] selected driver: docker
	I1205 19:41:16.947474 1233532 start.go:901] validating driver "docker" against <nil>
	I1205 19:41:16.947492 1233532 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:41:17.009008 1233532 out.go:201] 
	W1205 19:41:17.010684 1233532 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 19:41:17.012106 1233532 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-376504 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-376504" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-env-532867
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-401588
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-685524
contexts:
- context:
    cluster: force-systemd-env-532867
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-env-532867
  name: force-systemd-env-532867
- context:
    cluster: kubernetes-upgrade-401588
    user: kubernetes-upgrade-401588
  name: kubernetes-upgrade-401588
- context:
    cluster: running-upgrade-685524
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: running-upgrade-685524
  name: running-upgrade-685524
current-context: running-upgrade-685524
kind: Config
preferences: {}
users:
- name: force-systemd-env-532867
  user:
    client-certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/force-systemd-env-532867/client.crt
    client-key: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/force-systemd-env-532867/client.key
- name: kubernetes-upgrade-401588
  user:
    client-certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/kubernetes-upgrade-401588/client.crt
    client-key: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/kubernetes-upgrade-401588/client.key
- name: running-upgrade-685524
  user:
    client-certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/running-upgrade-685524/client.crt
    client-key: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/running-upgrade-685524/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-376504

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376504"

                                                
                                                
----------------------- debugLogs end: false-376504 [took: 3.665137301s] --------------------------------
helpers_test.go:175: Cleaning up "false-376504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-376504
--- PASS: TestNetworkPlugins/group/false (4.09s)
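The false CNI variant is expected to be rejected immediately, since the crio runtime needs a CNI plugin. A toy Go version of that validation is sketched below; the function name and signature are illustrative, and the message is copied from the error above rather than from minikube's source.

package main

import (
	"errors"
	"fmt"
)

// validateRuntime is a toy stand-in for the check behind the MK_USAGE error above.
func validateRuntime(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return errors.New(`The "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(validateRuntime("crio", "false")) // rejected, as in the test above
	fmt.Println(validateRuntime("crio", "auto"))  // accepted
}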

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (159.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-163957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-163957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m39.913006328s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (159.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (54.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-450473 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-450473 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (54.606724247s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-450473 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3bd7c70a-44fc-416c-ab72-91a8a3763148] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3bd7c70a-44fc-416c-ab72-91a8a3763148] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004270561s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-450473 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-450473 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-450473 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-450473 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-450473 --alsologtostderr -v=3: (11.855918796s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-450473 -n no-preload-450473
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-450473 -n no-preload-450473: exit status 7 (69.6434ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-450473 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (262.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-450473 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-450473 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.119042732s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-450473 -n no-preload-450473
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-163957 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d9699057-59ac-43fd-9f87-5417956fa301] Pending
helpers_test.go:344: "busybox" [d9699057-59ac-43fd-9f87-5417956fa301] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d9699057-59ac-43fd-9f87-5417956fa301] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003431123s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-163957 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-163957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-163957 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-163957 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-163957 --alsologtostderr -v=3: (12.046708615s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163957 -n old-k8s-version-163957
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163957 -n old-k8s-version-163957: exit status 7 (98.357041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-163957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (138.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-163957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-163957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m18.411590237s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163957 -n old-k8s-version-163957
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (138.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (44.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-304567 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-304567 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (44.966082129s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-304567 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ebf5c84-3141-402b-b97f-672d1bcf896d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1205 19:45:43.412536 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/addons-792804/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7ebf5c84-3141-402b-b97f-672d1bcf896d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003674294s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-304567 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-304567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-304567 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-304567 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-304567 --alsologtostderr -v=3: (11.839737924s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304567 -n embed-certs-304567
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304567 -n embed-certs-304567: exit status 7 (79.772981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-304567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (263.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-304567 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-304567 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m23.123994094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304567 -n embed-certs-304567
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-081396 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-081396 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (40.629530264s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-081396 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [524ebfe9-67b6-4d99-b51b-1853f65f2ebe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [524ebfe9-67b6-4d99-b51b-1853f65f2ebe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004392512s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-081396 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-081396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-081396 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-081396 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-081396 --alsologtostderr -v=3: (11.834460913s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396: exit status 7 (73.987339ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-081396 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (273.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-081396 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-081396 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m33.560384351s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (273.88s)
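
Note: this profile pins the API server to port 8444 via --apiserver-port. A quick, generic way to confirm the restarted cluster is serving on that port (not part of the test itself) is:

    # the control-plane URL printed here should end in :8444 for this profile
    kubectl --context default-k8s-diff-port-081396 cluster-info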

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pzf4w" [ea21a264-8342-43af-8e17-4d94d870663f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003548757s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
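
Note: UserAppExistsAfterStop and the AddonExistsAfterStop check that follows both look for the dashboard pods surviving the restart. A rough manual spot-check with the same selector:

    # dashboard workloads for this addon live in the kubernetes-dashboard namespace
    kubectl --context old-k8s-version-163957 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard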

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pzf4w" [ea21a264-8342-43af-8e17-4d94d870663f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003789488s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-163957 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163957 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
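
Note: the image audit above can be rerun directly with the image list subcommand; the JSON form is what the test parses, while the default table form is easier to eyeball (a rough sketch, no extra tooling assumed):

    out/minikube-linux-amd64 -p old-k8s-version-163957 image list --format=json
    out/minikube-linux-amd64 -p old-k8s-version-163957 image list   # human-readable table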

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-163957 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163957 -n old-k8s-version-163957
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163957 -n old-k8s-version-163957: exit status 2 (295.284195ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-163957 -n old-k8s-version-163957
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-163957 -n old-k8s-version-163957: exit status 2 (314.264638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-163957 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163957 -n old-k8s-version-163957
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-163957 -n old-k8s-version-163957
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.69s)
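
Note: the Pause check freezes the control plane and kubelet, verifies the reported states, then unpauses. A rough manual equivalent with the same profile:

    out/minikube-linux-amd64 pause -p old-k8s-version-163957 --alsologtostderr -v=1
    # while paused, the apiserver reports Paused and the kubelet reports Stopped (hence exit status 2 above)
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163957 -n old-k8s-version-163957
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-163957 -n old-k8s-version-163957
    out/minikube-linux-amd64 unpause -p old-k8s-version-163957 --alsologtostderr -v=1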

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-005302 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-005302 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (28.301273723s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mtqgd" [485b8f05-a14c-464b-adf0-1612c1101b13] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003808133s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mtqgd" [485b8f05-a14c-464b-adf0-1612c1101b13] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00382808s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-450473 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-450473 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-450473 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-450473 -n no-preload-450473
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-450473 -n no-preload-450473: exit status 2 (316.819745ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-450473 -n no-preload-450473
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-450473 -n no-preload-450473: exit status 2 (299.324227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-450473 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-450473 -n no-preload-450473
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-450473 -n no-preload-450473
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.79s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.330971865s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-005302 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-005302 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.342583597s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-005302 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-005302 --alsologtostderr -v=3: (1.208551472s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-005302 -n newest-cni-005302
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-005302 -n newest-cni-005302: exit status 7 (82.374636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-005302 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-005302 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-005302 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (13.063478639s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-005302 -n newest-cni-005302
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-005302 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-005302 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-005302 -n newest-cni-005302
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-005302 -n newest-cni-005302: exit status 2 (293.047728ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-005302 -n newest-cni-005302
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-005302 -n newest-cni-005302: exit status 2 (291.256623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-005302 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-005302 -n newest-cni-005302
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-005302 -n newest-cni-005302
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.69s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (39.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.659896013s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.66s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-376504 "pgrep -a kubelet"
I1205 19:48:31.352226 1006315 config.go:182] Loaded profile config "auto-376504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-376504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c457g" [66633fec-48c2-4d90-8bff-9a38e3477246] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c457g" [66633fec-48c2-4d90-8bff-9a38e3477246] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003337385s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-376504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
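
Note: the three probes above (DNS, Localhost, HairPin) are single kubectl execs against the netcat deployment. Combined, a rough manual sweep for the auto profile looks like:

    # cluster DNS resolution from inside a pod
    kubectl --context auto-376504 exec deployment/netcat -- nslookup kubernetes.default
    # localhost reachability of the pod's own listener
    kubectl --context auto-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: reaching the pod back through its own service name
    kubectl --context auto-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"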

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-z6jdc" [25ade4c7-703c-4fae-aafb-b5e79acaa1c5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003914774s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (56.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.745370611s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.75s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-376504 "pgrep -a kubelet"
I1205 19:49:01.759066 1006315 config.go:182] Loaded profile config "kindnet-376504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-376504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wmxkf" [74241a9a-e393-459f-b886-35792005ff8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wmxkf" [74241a9a-e393-459f-b886-35792005ff8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004288248s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-376504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (47.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1205 19:49:33.005273 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/old-k8s-version-163957/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:49:38.127066 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/old-k8s-version-163957/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:49:42.988757 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/functional-418616/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:49:48.368444 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/old-k8s-version-163957/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (47.737756002s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.74s)
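
Note: custom-flannel differs from the built-in flannel group only in that --cni points at a manifest file rather than a keyword. The invocation, copied from the log above:

    # bring up a cluster that applies the repo's kube-flannel.yaml as the CNI
    out/minikube-linux-amd64 start -p custom-flannel-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio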

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v4f5l" [18225d96-746f-41e2-a112-65f2d7083acd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004187833s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
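
Note: the ControllerPod check waits for the CNI's own pods to report Running. A rough manual spot-check with the label selector used above:

    # calico's node agent runs in kube-system
    kubectl --context calico-376504 get pods -n kube-system -l k8s-app=calico-node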

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-376504 "pgrep -a kubelet"
I1205 19:50:02.561930 1006315 config.go:182] Loaded profile config "calico-376504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-376504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rtlx8" [ea966342-9252-4c0a-9ea9-3101e4a1e594] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rtlx8" [ea966342-9252-4c0a-9ea9-3101e4a1e594] Running
E1205 19:50:08.849915 1006315 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/old-k8s-version-163957/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003989398s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-376504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-376504 "pgrep -a kubelet"
I1205 19:50:19.081250 1006315 config.go:182] Loaded profile config "custom-flannel-376504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-376504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hjldx" [6c8f40dc-9bbc-4697-9184-b4a66d523fa8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hjldx" [6c8f40dc-9bbc-4697-9184-b4a66d523fa8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003695145s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5npmh" [34fdb84c-b866-42a7-ab5b-3bfe91ae7718] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004316229s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-376504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (44.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (44.209580017s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5npmh" [34fdb84c-b866-42a7-ab5b-3bfe91ae7718] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004759745s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-304567 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-304567 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-304567 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304567 -n embed-certs-304567
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304567 -n embed-certs-304567: exit status 2 (311.443736ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-304567 -n embed-certs-304567
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-304567 -n embed-certs-304567: exit status 2 (307.30838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-304567 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304567 -n embed-certs-304567
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-304567 -n embed-certs-304567
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)
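The pause/unpause round trip above can be replayed manually; a sketch, assuming the embed-certs-304567 profile is running. While the cluster is paused, the status command prints Paused/Stopped and exits with code 2, which the test treats as acceptable:

    out/minikube-linux-amd64 pause -p embed-certs-304567
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304567 || true   # exits 2, prints Paused
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-304567 || true     # exits 2, prints Stopped
    out/minikube-linux-amd64 unpause -p embed-certs-304567
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304567           # back to a zero exit once unpaused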

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.744450102s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (64.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-376504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.112437407s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hhf8x" [cfdafe54-3554-47f3-82d1-778f6c61b7e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004080208s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
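The readiness wait the harness performs here can be approximated directly with kubectl; a sketch, assuming the flannel-376504 context from this run:

    kubectl --context flannel-376504 -n kube-flannel wait \
      --for=condition=ready pod -l app=flannel --timeout=10m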

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-376504 "pgrep -a kubelet"
I1205 19:51:22.643098 1006315 config.go:182] Loaded profile config "flannel-376504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-376504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-68vg9" [b03db885-0519-41be-9e84-39e47335cc66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-68vg9" [b03db885-0519-41be-9e84-39e47335cc66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004516244s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.17s)
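The netcat smoke deployment used by the NetCatPod checks comes from the repository's testdata; a sketch of applying it and waiting for it by hand, assuming the working directory is the minikube test tree and the flannel-376504 context exists:

    kubectl --context flannel-376504 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-376504 wait --for=condition=ready pod -l app=netcat --timeout=15m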

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-376504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)
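Taken together, the DNS, Localhost, and HairPin probes above exercise three paths from inside the netcat pod: service DNS resolution, a loopback connection to the pod's own port, and a hairpin connection back to the pod through its own Service name. A sketch of running the same probes by hand, assuming the flannel-376504 context and the netcat deployment from the previous test:

    kubectl --context flannel-376504 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context flannel-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context flannel-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"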

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gnlml" [d4595649-d21f-40ae-b8cb-30206f001781] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003960004s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gnlml" [d4595649-d21f-40ae-b8cb-30206f001781] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003553053s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-081396 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-376504 "pgrep -a kubelet"
I1205 19:51:47.952085 1006315 config.go:182] Loaded profile config "bridge-376504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-376504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d4qwx" [925b39fc-411b-4e37-835a-208b5dc1eb2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d4qwx" [925b39fc-411b-4e37-835a-208b5dc1eb2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004490086s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-081396 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-081396 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396: exit status 2 (282.7754ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396: exit status 2 (284.156013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-081396 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-081396 -n default-k8s-diff-port-081396
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-376504 "pgrep -a kubelet"
I1205 19:51:54.569596 1006315 config.go:182] Loaded profile config "enable-default-cni-376504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-376504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hv72p" [297bd457-6f39-45f1-b53e-a4af1b767878] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hv72p" [297bd457-6f39-45f1-b53e-a4af1b767878] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004295672s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-376504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-376504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-376504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    

Test skip (26/330)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-792804 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-882784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-882784
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-376504 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-376504" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-env-532867
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-401588
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:40:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-685524
contexts:
- context:
    cluster: force-systemd-env-532867
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-env-532867
  name: force-systemd-env-532867
- context:
    cluster: kubernetes-upgrade-401588
    user: kubernetes-upgrade-401588
  name: kubernetes-upgrade-401588
- context:
    cluster: running-upgrade-685524
    user: running-upgrade-685524
  name: running-upgrade-685524
current-context: force-systemd-env-532867
kind: Config
preferences: {}
users:
- name: force-systemd-env-532867
  user:
    client-certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/force-systemd-env-532867/client.crt
    client-key: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/force-systemd-env-532867/client.key
- name: kubernetes-upgrade-401588
  user:
    client-certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/kubernetes-upgrade-401588/client.crt
    client-key: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/kubernetes-upgrade-401588/client.key
- name: running-upgrade-685524
  user:
    client-certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/running-upgrade-685524/client.crt
    client-key: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/running-upgrade-685524/client.key
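This kubeconfig dump simply reflects the profiles that were still present when the debug logs ran; it contains no kubenet-376504 context, which is why every kubectl probe in this section fails with a missing-context error. A sketch of inspecting the same state by hand, using only standard kubectl config subcommands and the context names shown above:

    kubectl config get-contexts
    kubectl config use-context force-systemd-env-532867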

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-376504

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376504"

                                                
                                                
----------------------- debugLogs end: kubenet-376504 [took: 3.390439004s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-376504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-376504
--- SKIP: TestNetworkPlugins/group/kubenet (3.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
I1205 19:41:21.067072 1006315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 19:41:21.067170 1006315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1205 19:41:21.101374 1006315 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1205 19:41:21.101406 1006315 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1205 19:41:21.101485 1006315 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 19:41:21.101520 1006315 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3744952751/002/docker-machine-driver-kvm2
I1205 19:41:21.124561 1006315 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3744952751/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000015ce0 gz:0xc000015ce8 tar:0xc000015a90 tar.bz2:0xc000015ae0 tar.gz:0xc000015b20 tar.xz:0xc000015b30 tar.zst:0xc000015c90 tbz2:0xc000015ae0 tgz:0xc000015b20 txz:0xc000015b30 tzst:0xc000015c90 xz:0xc000015cf0 zip:0xc000015d00 zst:0xc000015cf8] Getters:map[file:0xc001db7440 http:0xc00077de50 https:0xc00077dea0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 19:41:21.124618 1006315 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3744952751/002/docker-machine-driver-kvm2
panic.go:629: 
----------------------- debugLogs start: cilium-376504 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-376504" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20052-999445/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 19:41:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-401588
contexts:
- context:
    cluster: kubernetes-upgrade-401588
    user: kubernetes-upgrade-401588
  name: kubernetes-upgrade-401588
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-401588
  user:
    client-certificate: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/kubernetes-upgrade-401588/client.crt
    client-key: /home/jenkins/minikube-integration/20052-999445/.minikube/profiles/kubernetes-upgrade-401588/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-376504

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-376504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376504"

                                                
                                                
----------------------- debugLogs end: cilium-376504 [took: 4.549185717s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-376504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-376504
--- SKIP: TestNetworkPlugins/group/cilium (4.76s)
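Every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the cilium-376504 profile was never started (the test was skipped), so no such context exists in the kubeconfig shown earlier, whose only entry is kubernetes-upgrade-401588 and whose current-context is empty. A minimal sketch of checking for the context up front, assuming the k8s.io/client-go/tools/clientcmd package; the contextExists helper is illustrative and is not minikube's actual debug code:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the named context is present in the
// kubeconfig at path.
func contextExists(path, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	ok, err := contextExists(kubeconfig, "cilium-376504")
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		os.Exit(1)
	}
	if !ok {
		// Matches the errors seen in the log: the context simply does not exist.
		fmt.Println(`context "cilium-376504" does not exist`)
		os.Exit(1)
	}
	fmt.Println("context found")
}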